AU2016201929A1 - System and method for modifying display of augmented reality content - Google Patents


Info

Publication number: AU2016201929A1
Authority: AU (Australia)
Prior art keywords: user, augmented reality, cognitive load, visual, reality content
Legal status: Abandoned
Application number: AU2016201929A
Inventor: Belinda Margaret Yee
Current Assignee: Canon Inc
Original Assignee: Canon Inc
Application filed by Canon Inc
Priority to AU2016201929A
Publication of AU2016201929A1


Abstract

A system and method of modifying display of augmented reality content are disclosed. The method comprises determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user (230); and selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another (240). The method further comprises modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user (250, 260).

[Abstract figure, Fig. 5: obtain variation in cognitive load; identify type of content being displayed; identify visual zone/s; compare to rule table (540) for content display modification; apply content display modification (550).]

Description

SYSTEM AND METHOD FOR MODIFYING DISPLAY OF AUGMENTED REALITY CONTENT
TECHNICAL FIELD
[0001] The present invention relates to the display of digital content in a user’s environment. In particular, the present invention relates to a system and method for modifying the presentation of augmented reality content.
BACKGROUND
[0002] Augmented reality relates to a view of a physical world where some elements of physical reality are augmented by computer-generated inputs such as graphics, sound and so on. Users are able to use hand-held, wearable or desk-based devices to retrieve additional information related to a captured image of a real-world object from a camera connected to the device (e.g., a camera attached to a head-mounted display, tablet computer or projector) and overlay that additional information on the real-world object.
[0003] A typical work desk is a cluttered space. The addition of augmented reality information, also referred to as augmented reality content or augmented content, increases the visual complexity of the work desk. Because augmented reality content is digitally displayed, it is typically displayed to the user with higher contrast and brightness than real-world objects, and can therefore be more visually dominant to the user. Augmented reality content may also be animated, or may automatically change size, shape, colour or location, which can unintentionally draw a user's attention. In such visually complex environments a key challenge for users is to maintain focus on difficult work tasks and not be distracted by the bright, potentially animated augmented reality information around the work desk. A corresponding challenge for augmented reality content display systems is to prevent the user from being distracted by the augmented reality content or the complex work environment.
[0004] A need exists for an augmented reality system that is attentive to the user's cognitive state and that modifies the environment viewed by the user (both the digital augmented reality content and the physical environment) in response to various states of, and transitions in, user cognitive load, so as to reduce visual distraction and enhance concentration.
SUMMARY
[0005] It is an object of the present disclosure to substantially overcome, or at least ameliorate, at least one disadvantage of present arrangements.
[0006] A first aspect of the present disclosure provides a method of modifying display of augmented reality content, the method comprising: determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user; selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
[0007] According to another aspect, the sensed biometric parameter relates to dilation of pupils of the user.
[0008] According to another aspect, determining the variation in cognitive load comprises determining that the user is transitioning from a relatively low cognitive load to a relatively high cognitive load.
[0009] According to another aspect, determining the variation in cognitive load comprises determining that the user is transitioning from a relatively high cognitive load to a relatively low cognitive load.
[0010] According to another aspect, determining the variation in cognitive load comprises determining that the user has a fluctuating level of cognitive load.
[0011] According to another aspect, the augmented reality content is at least one of text, animation, video, graphics and augmented content diminishing a surrounding environment from the field of view of the user.
[0012] According to another aspect, a speed of implementation of the modification of the display characteristic is adjusted based upon the determined variation of cognitive load.
[0013] According to another aspect, a size of the selected at least one visual zone is changed according to the determined variation of cognitive load.
[0014] According to another aspect, the method further comprises determining a plurality of visual zones of the user, the at least one visual zone being selected from the determined plurality of zones.
[0015] According to another aspect, the plurality of visual zones is based upon one of a distance from a focal point of the user, an area of a workspace, a proximity to a target object, and a distance from the user.
[0016] According to another aspect, modifying the display characteristic comprises moving a location of the augmented reality content from one visual zone of the user to another visual zone of the user.
[0017] According to another aspect, the augmented reality content is text and modification of the display characteristics comprises generating and displaying a summary of the text.
[0018] According to another aspect, modifying display of the augmented reality content comprises diminishing visual prominence of a portion of the surrounding environment in the field of view of the user.
[0019] According to another aspect, modifying the display characteristic of the augmented reality content comprises modifying at least one of brightness, colour, animation, contrast, luminance, transparency, location, size, degree of fading and angle of the augmented reality content.
[0020] According to another aspect, the augmented reality content is video content and modifying the display characteristics comprises modifying a playback speed of the video content.
[0021] According to another aspect, the modification of the display characteristic of the augmented reality content in one selected visual zone of the user differs with respect to modification of the display characteristic of the augmented reality content in another selected visual zone of the user.
[0022] Another aspect of the present disclosure provides a non-transitory computer readable storage medium having a computer program stored thereon for modifying display of augmented reality content, the program comprising: code for determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user; code for selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and code for modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
[0023] Another aspect of the present disclosure provides an augmented reality system, configured to: display augmented reality content to a user of the augmented reality system; sense a biometric parameter of the user; determine a variation of cognitive load of a user viewing the augmented reality content according to the sensed biometric parameter of the user; select at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and modify, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
[0024] Another aspect of the present disclosure provides an apparatus, comprising: a processor, a sensor for sensing a biometric parameter of a user of the apparatus; and a memory, the memory having instructions thereon executable by the processor to: display augmented reality content to the user, and modify display of augmented reality content in the augmented reality environment, by: determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user; selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] One or more embodiments of the invention will now be described with reference to the following drawings, in which:
[0026] Figs. 1A, 1B and 1C form a schematic block diagram of a system upon which arrangements described can be practiced;
[0027] Fig. 2 shows a schematic flow diagram for a method of adjusting display of augmented reality content;
[0028] Fig. 3 shows a schematic flow diagram for a method of determining cognitive load of a user as used in the method of Fig. 2;
[0029] Fig. 4 shows a schematic flow diagram for a method of determining a variation in cognitive load of the user as used in Fig. 2;
[0030] Fig. 5 shows a schematic diagram for a method of modifying augmented reality content display attributes;
[0031] Fig. 6 shows an example presentation of augmented content according to visual zone locations;
[0032] Fig. 7A shows an example presentation of augmented content according to visual zones when the user's eye is not dilated;
[0033] Fig. 7B shows presentation of the augmented content of Fig. 7A when the user's eye is dilated;
[0034] Fig. 8 shows an example table describing changes to presentation of augmented content according to time, zone, content type and variation in cognitive load;
[0035] Fig. 9 shows an example of a pupil dilation pattern as the user enters and exits a state of concentration;
[0036] Fig. 10A shows an environment with a moving object and augmented content when the user's eye is not dilated; and
[0037] Fig. 10B shows the environment and augmented reality content of Fig. 10A when the user's eye is dilated.
DETAILED DESCRIPTION INCLUDING BEST MODE
[0038] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[0039] Fig. 1A shows a projection-based augmented reality system 100 on which the arrangements described may be practiced. In the projection-based augmented reality system 100, a projector 169 is used to present augmented content 185 related to physical objects in a scene 187. The system 100 includes a computing device 101 connected to the projector 169, a camera 127 and a biometric sensor 177. The biometric sensor 177 captures biometric data from a user of the system 100 and transmits the biometric data to the computing device 101. The camera 127 retrieves a raster image of a camera field of view 180 and transmits the raster image to the computing device 101. The projector 169 receives digital data from the computing device 101 and projects augmented content 170 relative to a target object 175.
[0040] The system 100 includes a software architecture 190. The software architecture 190 may be implemented as part of a software application 133 (Fig. 1B) stored on the computing device 101. The application 133 receives data from the biometric sensor 177 and the camera 127. The software architecture 190 consists of a biometric evaluation module 191 and a cognitive load calculation module 192. The modules 191 and 192 use the data from the biometric sensor 177 to determine user cognitive load. The software architecture 190 also includes a zone selection module 193 and a content display module 194. The modules 193 and 194 determine and modify display attributes of the augmented reality content 185 based upon a determined cognitive load of the user.
[0041] In the arrangements described herein, an augmented reality information presentation system identifies a user’s cognitive state by monitoring biometric indicators such as pupil dilation, skin conductivity, and brain wave activity measured via electroencephalogram (EEG).
[0042] In one approach, the arrangements described map a speed at which the augmented reality information presentation changes according to a direction of the user’s cognitive state and the location of the augmented reality information. The direction of the user’s cognitive state may for example relate to entering or exiting a state of deep concentration.
[0043] In one approach, the arrangements described map the type of augmented reality content change to a level of the user’s cognitive state (for example, deeply concentrating, lightly concentrating, skimming and the like), the type of augmented reality content, and the location of the augmented reality content.
[0044] Some augmented reality systems monitor a user's cognitive state to identify the boundaries between sub-tasks in order to time the presentation of notifications to minimise interruption cost. Such systems are concerned with reducing interruptions to the user and thereby increasing productivity. However, such systems are primarily concerned with identifying when a user exits a concentrating state in order to present new notification information. Such systems fail to mediate any existing digital content in the user's environment which may cause distraction while the user is concentrating. Such systems also fail to mediate the presentation of the notifications according to other characteristics of the user's cognitive state.
[0045] Another known approach changes the rate at which images are shown to a user in an image triage scenario, so that more images are shown to a user that is alert, and fewer images are shown when the user's attention wanes. Such approaches attentively monitor the user's cognitive state, and pre-emptively modify the presentation of digital content in one location only, the image triage display area. Such approaches fail to affect the overall work area or visual environment by modifying objects which may cause distraction.
[0046] The arrangements described herein relate to data representation technologies and more particularly to a system and method for adjusting the visually distracting characteristics of augmented reality content. Adjustment of the augmented reality content may be implemented according to the cognitive state of the user as indicated by biometric measures. Biometric measures relate to biometric parameters or characteristics of a human body such as pupil dilation, skin conductivity, and brain wave activity measured via electroencephalogram (EEG). The human eye is a rich source of biometric data, providing measures such as gaze direction, gaze duration, saccade length, and pupil dilation.
[0047] Pupil dilation is considered an accurate and reliable biometric measure for identifying cognitive load. When a user starts working on a difficult task, the user enters a high cognitive load state, and the user’s pupils dilate. When the user finishes the difficult task and exits the high cognitive load state, the user’s pupils contract. The degree to which an eye dilates indicates a magnitude of cognitive load. In other words, pupil size indicates a level of task difficulty.
[0048] In the example of Figs. 2-10, the biometric parameter sensed in the system 100 relates to pupil dilation. However, other biometric parameters may be used to determine cognitive load of the user, such as brain wave activity measured via electroencephalogram (EEG).
[0049] Electroencephalography measures electrical activity in the brain via a system of sensors placed on the surface of the scalp. Electroencephalography is an appropriate technology for user control of computer interfaces because it is non-invasive and relatively inexpensive. An electroencephalography system can be used standalone or in conjunction with other biometric sensors. In the arrangements described the electroencephalography system is used standalone to identify the level of cognitive load.
[0050] The spectral power (in particular in the alpha and theta bands) and event-related potentials (ERPs, in particular the event-related potential P300) recorded by an electroencephalogram can be used as measures of cognitive load. The measures can be combined to improve the accuracy of estimating cognitive load.
[0051] The electroencephalogram, when used to measure the electrical activity during high cognitive load tasks, produces a time series of voltage amplitudes. The time series can be decomposed into its underlying frequencies. For example, taking a Fourier transform of the electroencephalogram data provides the frequency content of the signal. The sensors can measure a number of different frequency bands, commonly referred to as Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-12 Hz) and Beta (13-20 Hz).
[0052] Increased cognitive load is associated with an increase in power of the frontal-midline theta electroencephalogram signals and typically with a decrease in power of the alpha band signal. Some existing electroencephalogram systems extract theta and other frequencies and return results as a workload index.
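As an illustration only, the following Python sketch shows how the band decomposition and a workload index of the kind described above might be computed. The band edges follow the ranges given in paragraph [0051] and the theta/alpha ratio reflects the relationship described in paragraph [0052]; the function names, and the use of a plain FFT rather than a production spectral estimator, are assumptions for illustration rather than the patented method.

```python
import numpy as np

# Frequency bands as described above (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (13, 20)}

def band_powers(eeg, fs):
    """Decompose an EEG voltage time series into mean spectral power per band."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def workload_index(eeg, fs=512):
    """Increased load raises frontal-midline theta power and typically lowers
    alpha power, so a theta/alpha ratio serves as a simple workload index."""
    p = band_powers(eeg, fs)
    return p["theta"] / p["alpha"]
```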
[0053] Although brain activity has been shown to demonstrate recognizable and repeatable modulations in response to higher cognitive load states, these modulations are not generalizable across different users on different days. For these reasons, strategies are needed to accommodate the variation in results from day to day. Some implementations use feature selection techniques and machine learning algorithms to fine-tune based on contexts such as user, task and day.
[0054] Another biometric parameter that can be used to measure cognitive load is skin conductivity, as characterised by a user's Galvanic Skin Response (GSR). Galvanic Skin Response is a method of measuring the electrical resistance of the skin between two points. Measuring the electrical resistance of the skin typically involves sending a small amount of current through the user's body and measuring the resistance continuously (e.g. once every 100 milliseconds). It is understood from previous studies that skin conductivity varies in accordance with the nature of the task; it has been shown that for a task requiring a high cognitive load, skin conductivity increases. One way in which Galvanic Skin Response can be used to measure cognitive load is to measure Galvanic Skin Response values periodically while a task is being performed.
[0055] Returning to the example of Figs. 2-10, pupil dilation is affected by various factors other than cognitive load, such as ambient light, brightness and size of viewed content, shadows, and human factors such as mood and tiredness. The arrangements described relating to pupil dilation typically baseline pupil activity for a particular context to mitigate environmental and user factors. For example, the system 100 may include a light meter (not shown) to account for variations in environmental light. The user is typically required to complete a learning phase prior to commencing general use of the system 100. In the learning phase, reactions of the user's pupils are monitored and recorded over time such that a baseline for the behaviour of the user's pupils under different lighting conditions and in response to different concentration scenarios is determined.
[0056] The systems and methods described use pupil dilation to evaluate the user's cognitive load and the direction of transition of the user's cognitive load, that is, whether the user is entering or exiting a high cognitive load state. The arrangements described then adjust display attributes of the augmented reality content 185 according to this variation in cognitive load and the location of the augmented reality content 185. Some examples of display attribute changes include animation speed, contrast, brightness, size, and transparency. The display attributes are changed according to the user's cognitive load in order to reduce distraction and increase work productivity. The speed at which modification of the display attributes is implemented may be adjusted based upon the variation in cognitive load of the user.
[0057] Figs. 1B and 1C depict a general-purpose computer system 100, upon which the various arrangements described can be practiced.
[0058] As seen in Fig. 1B, the computer system 100 includes: a computer module 101; input devices such as a keyboard 102, a mouse pointer device 103, a scanner 126, a camera 127, and a microphone 180; and output devices including a printer 115, a display device 114 and loudspeakers 117. An external Modulator-Demodulator (Modem) transceiver device 116 may be used by the computer module 101 for communicating to and from a communications network 120 via a connection 121. The communications network 120 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 121 is a telephone line, the modem 116 may be a traditional "dial-up" modem. Alternatively, where the connection 121 is a high capacity (e.g., cable) connection, the modem 116 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 120.
[0059] The computer module 101 typically includes at least one processor unit 105, and a memory unit 106. For example, the memory unit 106 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 101 also includes a number of input/output (I/O) interfaces including: an audio-video interface 107 that couples to the video display 114, loudspeakers 117 and microphone 180; an I/O interface 113 that couples to the keyboard 102, mouse 103, scanner 126, camera 127 and optionally a joystick or other human interface device (not illustrated); and an interface 108 for the external modem 116 and printer 115. The interface 113 also couples to the biometric sensor 177.
[0060] The arrangements described typically relate to desktop scenarios and use of wearable devices such as a head-mountable device. The biometric sensor 177 may be integral to the computer module 101, depending on the nature of the module 101 and the biometric parameter involved. For example, the sensor 177 may relate to one or more cameras for capturing images of pupils of the user. The cameras may be installed adjacent the display 114 for implementations where the module 101 relates to a desktop computer or tablet. Alternatively, the sensor 177 may relate to cameras or other optical tracking devices installed in lenses or frames if the computer module 101 relates to a head mountable display.
[0061] Alternatively, the biometric sensor 177 may be a device worn by the user connected by a wired or wireless connection to the computer module 101. Examples of wearable devices include strap-on devices, clip devices and other types of device which may be worn on the human body. For example, electrodes may be placed on a scalp of a person, with a conductive gel or paste, to perform an electroencephalogram (EEG). Each electrode is connected to an amplifier to amplify the voltage between the electrode and a reference. The amplified voltages are sampled at a specific rate (e.g. 512 Hz) and artefacts are removed using high/low pass filters. For measuring Galvanic Skin Response (GSR), a Galvanic Skin Response device can be used by attaching sensors to a finger of the person. The sensors are connected to a computer which records the Galvanic Skin Response values, sampled at a given frequency. An example of a Galvanic Skin Response device is the ProComp Infiniti device provided by Thought Technology Ltd.
[0062] In some implementations, the modem 116 may be incorporated within the computer module 101, for example within the interface 108. The computer module 101 also has a local network interface 111, which permits coupling of the computer system 100 via a connection 123 to a local-area communications network 122, known as a Local Area Network (LAN). As illustrated in Fig. 1B, the local communications network 122 may also couple to the wide network 120 via a connection 124, which would typically include a so-called "firewall" device or device of similar functionality. The local network interface 111 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 111.
[0063] The I/O interfaces 108 and 113 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 109 are provided and typically include a hard disk drive (HDD) 110. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 112 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 100.
[0064] The components 105 to 113 of the computer module 101 typically communicate via an interconnected bus 104 and in a manner that results in a conventional mode of operation of the computer system 100 known to those in the relevant art. For example, the processor 105 is coupled to the system bus 104 using a connection 118. Likewise, the memory 106 and optical disk drive 112 are coupled to the system bus 104 by connections 119. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[0065] The methods of Figs. 2-5 may be implemented using the computer system 100, wherein the processes of Figs. 2-5, to be described, may be implemented as one or more software application programs 133 executable within the computer system 100. In particular, the steps of the method of adjusting display of augmented reality content are effected by instructions 131 (see Fig. 1C) in the software 133 that are carried out within the computer system 100. The software instructions 131 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[0066] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 100 from the computer readable medium, and then executed by the computer system 100. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 100 preferably effects an advantageous apparatus for adjusting display of augmented reality content.
[0067] The software 133 is typically stored in the HDD 110 or the memory 106. The software is loaded into the computer system 100 from a computer readable medium, and executed by the computer system 100. Thus, for example, the software 133 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 125 that is read by the optical disk drive 112. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 100 preferably effects an apparatus for adjusting display of augmented reality content.
[0068] In some instances, the application programs 133 may be supplied to the user encoded on one or more CD-ROMs 125 and read via the corresponding drive 112, or alternatively may be read by the user from the networks 120 or 122. Still further, the software can also be loaded into the computer system 100 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 100 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 101. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 101 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[0069] The second part of the application programs 133 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 114. Through manipulation of typically the keyboard 102 and the mouse 103, a user of the computer system 100 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 117 and user voice commands input via the microphone 180.
[0070] Fig. 1C is a detailed schematic block diagram of the processor 105 and a "memory" 134. The memory 134 represents a logical aggregation of all the memory modules (including the HDD 109 and semiconductor memory 106) that can be accessed by the computer module 101 in Fig. 1B.
[0071] When the computer module 101 is initially powered up, a power-on self-test (POST) program 150 executes. The POST program 150 is typically stored in a ROM 149 of the semiconductor memory 106 of Fig. 1B. A hardware device such as the ROM 149 storing software is sometimes referred to as firmware. The POST program 150 examines hardware within the computer module 101 to ensure proper functioning and typically checks the processor 105, the memory 134 (109, 106), and a basic input-output systems software (BIOS) module 151, also typically stored in the ROM 149, for correct operation. Once the POST program 150 has run successfully, the BIOS 151 activates the hard disk drive 110 of Fig. 1B. Activation of the hard disk drive 110 causes a bootstrap loader program 152 that is resident on the hard disk drive 110 to execute via the processor 105. This loads an operating system 153 into the RAM memory 106, upon which the operating system 153 commences operation. The operating system 153 is a system level application, executable by the processor 105, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[0072] The operating system 153 manages the memory 134 (109, 106) to ensure that each process or application running on the computer module 101 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 100 of Fig. 1B must be used properly so that each process can run effectively. Accordingly, the aggregated memory 134 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 100 and how such is used.
[0073] As shown in Fig. 1C, the processor 105 includes a number of functional modules including a control unit 139, an arithmetic logic unit (ALU) 140, and a local or internal memory 148, sometimes called a cache memory. The cache memory 148 typically includes a number of storage registers 144 - 146 in a register section. One or more internal busses 141 functionally interconnect these functional modules. The processor 105 typically also has one or more interfaces 142 for communicating with external devices via the system bus 104, using a connection 118. The memory 134 is coupled to the bus 104 using a connection 119.
[0074] The application program 133 includes a sequence of instructions 131 that may include conditional branch and loop instructions. The program 133 may also include data 132 which is used in execution of the program 133. The instructions 131 and the data 132 are stored in memory locations 128, 129, 130 and 135, 136, 137, respectively. Depending upon the relative size of the instructions 131 and the memory locations 128-130, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 130. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 128 and 129.
[0075] In general, the processor 105 is given a set of instructions which are executed therein. The processor 105 waits for a subsequent input, to which the processor 105 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 102, 103, data received from an external source across one of the networks 120, 122, data retrieved from one of the storage devices 106, 109 or data retrieved from a storage medium 125 inserted into the corresponding reader 112, all depicted in Fig. 1B. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 134.
[0076] The disclosed arrangements use input variables 154, which are stored in the memory 134 in corresponding memory locations 155, 156, 157. The described arrangements produce output variables 161, which are stored in the memory 134 in corresponding memory locations 162, 163, 164. Intermediate variables 158 may be stored in memory locations 159, 160, 166 and 167.
[0077] Referring to the processor 105 of Fig. 1C, the registers 144, 145, 146, the arithmetic logic unit (ALU) 140, and the control unit 139 work together to perform sequences of microoperations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 133. Each fetch, decode, and execute cycle comprises: a fetch operation, which fetches or reads an instruction 131 from a memory location 128, 129, 130; a decode operation in which the control unit 139 determines which instruction has been fetched; and an execute operation in which the control unit 139 and/or the ALU 140 execute the instruction.
[0078] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 139 stores or writes a value to a memory location 132.
[0079] Each step or sub-process in the processes of Figs. 2-5 is associated with one or more segments of the program 133 and is performed by the register section 144, 145, 146, the ALU 140, and the control unit 139 in the processor 105 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 133.
[0080] The methods of Figs. 2-5 may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub-functions of Figs. 2-5. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[0081] Fig. 2 shows a schematic flow diagram of a method 200 of configuring a display of augmented reality content for a user. The method 200 may be implemented as one or more modules of the application 133, stored in the memory 106 and controlled by execution of the processor 105. The method 200 may be implemented as the modules 191, 192, 193 and 194 of the software architecture 190.
[0082] The method 200 executes to determine and modify (or adjust) display attributes of the augmented reality content 185. In the arrangements described, the brightness of the augmented reality content 185 is modified to reduce visual distraction. In modifying the brightness of the augmented reality content 185, the method 200 operates to modify the augmented reality content to adjust a degree of visual distraction of the augmented reality content to the user. The degree of visual distraction is a measure of how strongly the human eye is drawn to, or distracted by, the augmented content compared to the target content. Visual distraction relates to the attractive power of an object, such as augmented content, to draw the focus of the user. Visual distraction may be measured by how long or how often the focus of the user is drawn away from the desired focus area. For example, text in high contrast, colourful or with a flashing appearance may have a high degree of visual distraction. In contrast, text in colours similar to the background scene may have a low degree of visual distraction.
[0083] Other arrangements may operate to modify one or more other characteristics of augmented content to reduce visual distraction of the augmented reality content to the user, such as contrast, size, location, animation speed and transparency of the augmented reality content, to name a few. In the arrangements described the modification of speed and amount of change to brightness is dependent on two variables, being a direction of the user’s cognitive load (entering or exiting a high cognitive load state) and the location of the augmented content in the user’s field of view.
[0084] The location of augmented content is described in relation to Fig. 6, which shows a presentation 600 of augmented reality content displayed to a user 690. In the example of Fig. 6, as the user 690 enters a high cognitive load state, augmented content closest (650) to the user's focus point 610, in a close visual zone 620, is not changed, as the content 650 may be in use. Augmented content 660 in a mid-periphery zone 630 is modified by a relatively small amount and changes relatively slowly. Augmented content 670 in a far-periphery zone 640 is modified by a larger amount and changes faster than the content 660. The variation in amount and speed of modification is based upon characteristics of the eye of the user and a likelihood of the augmented content being in use by the user. For example, the far-periphery visual zone 640 is sensitive to bright, high contrast objects. The arrangements described accordingly reduce the possibility of distraction by quickly dimming the brightness of augmented content 670 in the far-periphery zone 640, and dimming the content 670 to a larger extent than the content 650 or 660.
[0085] Returning to Fig. 2, the method 200 commences at a display step 210. In execution of the display step 210 the camera 127 captures and sends an image of the camera field of view 180 to the computer module 101. The content display module 194 executes on the processor 105 to identify the target object 175 and associates the target object 175 with the augmented reality content 185. The augmented content 185 is then displayed to the user adjacent to the target object 175 by the projector 169. The scene 187 including the augmented reality content 185 is viewed by the user.
[0086] The method 200 progresses under execution of the processor 105 from the display step 210 to a determining step 220. In execution of the determining step 220, the application 133 executes to determine cognitive load of the user based on biometric data received from the biometric sensor 177. In the arrangements described the biometric data is pupil dilation tracked from a gaze detection sensor. The biometric parameter sensed relates to dilation of pupils of the user. However, as discussed above, other biometric measures indicating cognitive load could also be used, such as skin conductivity and brain wave activity measured via electroencephalogram (EEG).
[0087] A method of determining cognitive load according to a sensed biometric parameter of the user, as implemented at the determining step 220, is now described with reference to a method 300 as shown in Fig. 3. The method 300 may be implemented as one or more modules of the application 133, stored in the memory 106 and controlled by execution of the processor 105.
[0088] The method 300 starts at a determining step 310. In execution of the determining step 310 the application 133 determines threshold values indicating high and low cognitive load states of the user. The threshold values in some implementations are derived by comparing eye tracking data from a number of studies, such as Jang et al. (Jang, Young-Min, et al., "Human implicit intent transition detection based on pupillary analysis," Neural Networks (IJCNN), The 2012 International Joint Conference on Computational Intelligence, IEEE, 2012), Bailey and Iqbal (Brian P. Bailey and Shamsi T. Iqbal, "Understanding Changes in Mental Workload during Execution of Goal-Directed Tasks and Its Application for Interruption Management," ACM Transactions on Computer-Human Interaction, Vol. 14, No. 4, Article 21, 2008) and Beatty (Beatty, Jackson, "Task-evoked pupillary responses, processing load, and the structure of processing resources," Psychological Bulletin 91.2 (1982): 276). Combining pupil dilation results from multiple studies gives more generalizable thresholds. In the arrangements described, only one threshold, at a pupil diameter of 3.5 mm, is determined in the step 310. The system 100 may use alternative methods of determining thresholds in other implementations, such as logging characteristics of a particular user's pupil behaviour over time, identifying dilation plateaus and defining thresholds at a midway point between the identified plateaus. Logging characteristics for such implementations may for example occur in a learning period, as described above.
Alternatively, the threshold may be predetermined and stored in the memory 106. In such instances, the step 310 executes to receive one or more threshold values.
[0089] The method 300 progresses under execution of the processor from the step 310 to a measuring step 320. In execution of the step 320, pupil dilation of the user is measured by the biometric sensor 177.
[0090] The method 300 progresses under execution of the processor from the measuring step 320 to a selecting step 330. In some implementations, in execution of the selecting step 330 the application 133 selects the threshold value nearest to the measured dilation. In the arrangements described, only one threshold is identified.
[0091] The method 300 progresses under execution of the processor from the selecting step 330 to a recording step 340. In execution of the step 340 the application 133 records a number of times the user’s pupil dilation is above and below the selected threshold in a given period of time. The number of times may for example be stored temporarily in the memory 106.
[0092] The method 300 progresses under execution of the processor from the recording step 340 to a determining step 350. In execution of the step 350 the ratio of recorded pupil dilations above and below the selected threshold is determined.
[0093] The method 300 progresses under execution of the processor from the determining step 350 to a mapping step 360. The determined ratio is mapped in execution of the step 360 to cognitive load levels indicated by the threshold identified in the step 310. In the arrangements described the single threshold indicates two levels of cognitive load, high above the threshold and low below the threshold. In execution of the step 360 the user's ratio of pupil dilation determined at the step 350 is mapped to one of the two levels of cognitive load. The method 300 ends upon execution of the step 360.
[0094] Returning to Fig. 2, once the cognitive load is determined at the step 220, the method 200 progresses under execution of the processor 105 to a determining step 230. The application 133 executes to determine a variation in cognitive load according to the sensed biometric parameter of the user at the step 230. In the arrangements described the change (variation) in pupil dilation indicates the variation in cognitive load state. If the user’s pupil is dilating, such provides an indication that the user is transitioning from a relatively low cognitive load to a relatively high cognitive load, and entering a high cognitive load state. If the user’s pupil is contracting, such provides an indication that the user is exiting a relatively high cognitive load state, and entering a relatively low cognitive load state. Examples of other variations in cognitive load determined at the step 230 include determining whether the user is experiencing a stable cognitive load state or a fluctuating cognitive load state. If the user is skimming through documents the user’s pupil will not sustain dilation or show a clear trend toward dilation.
[0095] A method 400 of determining the variation in cognitive load, as executed at the step 230 is now described in more detail in relation to Fig. 4. The method 400 may be implemented as one or more modules of the application 133, stored in the memory 106 and controlled by execution of the processor 105.
[0096] The method 400 begins at an identification step 410. The step 410 executes to identify the cognitive load as mapped in execution of the step 360 (Fig. 3).
[0097] The method 400 progresses from the step 410 to an identifying step 420. Execution of the step 420 records the cognitive load determined at step 360 over a period of time, for example by storing the determined cognitive load in the memory 106. The step 420 executes to determine or identify a sustained change in the user’s cognitive load. When a change in cognitive load is identified in step 420, the method 400 progresses under execution of the processor 105 to a comparison step 430. In execution of the step 430, the stored load data is compared to cognitive load patterns.
[0098] Fig. 9 shows an example cognitive load pattern 900. In the example of Fig. 9 the pattern 900 has a shallow positive gradient 930 as the user enters a high cognitive load state, as indicated by a graph 910. The pattern 900 comprises a steep negative gradient 940 as the user exits a high cognitive load state, as indicated by a graph 920. Comparing recorded cognitive load variations to the cognitive load patterns enables the application 133 at the step 430 to identify the variation in cognitive load at step 230. In other implementations, other methods of identifying variation in cognitive load may be used. For example, one method of identifying variation in cognitive load combines a change in the average pupil dilation of the user with a change in gaze location behaviour to indicate a change in task (e.g., from reading to browsing), and therefore a change in cognitive load state from high cognitive load to low cognitive load.
[0099] Returning to Fig. 2, the method 200 progresses from the determining step 230 to a selection step 240. In execution of the step 240 at least one visual zone is selected. Each selected zone relates to the field of view of the user.
[00100] In some implementations, a number of visual zones are determined, and at least one of the visual zones is selected from the determined number of zones. The number of zones may be determined based upon one of a distance from a focal point of the user, an area of a workspace, a proximity to the target 175, and a distance from the user.
[00101] Referring to Fig. 6 for example, one of the zones 620, 630 and 640 is selected. The zones 620, 630 and 640 have different visual acuity to one another. The zones are identified by execution of the zone selection module 193 according to a determined distance from the focal point 610 of the user. In the arrangements described the zones refer to visual zones, being the close visual zone 620, the mid-periphery zone 630 and the far-periphery zone 640. The zones correlate with focal zones of the user respectively, being a para-foveal periphery, a near periphery and a far periphery. Each of the focal zones is known to have different characteristics such as visual acuity and sensitivity to particular colours, contrast or movement. In the arrangements described the augmented content is adjusted (modified) according to the visual zones 620, 630 and 640 in which the augmented content is located. However, other methods of identifying zones could be used in other implementations. Parameters such as distance of display of the augmented content from the user, location of the augmented content on the workspace or proximity to the target object could be used to define zones, as described above.
[00102] The method 200 progresses under execution of the processor 105 from the selecting step 240 to a modifying step 250. In execution of the step 250, one or more display attributes, also referred to as display characteristics, of the augmented content 185 are modified by operation of the content display module 194. The modification of display characteristics applies to augmented reality content within the zones selected at step 240. The modification of display characteristics adjusts the degree of visual distraction of the augmented reality content to the user.
[00103] A method 500 of adjusting content display attributes, as executed at the step 250, is described in more detail in relation to Fig. 5. The method 500 may be implemented as one or more modules of the application 133 stored in the memory 106 and controlled by execution of the processor 105.
[00104] The method 500 starts at an identifying step 510. In execution of the step 510 the variation in cognitive load determined at step 230 is identified. The method 500 progresses to an identifying step 520. In step 520 a type of augmented content being shown is identified by the content display module 194. The augmented content type may include text, animation, video, graphics or augmented content designed to diminish or cover up the real environment surrounding the user. Augmented content designed to diminish or cover up the real environment may be used to attenuate or eliminate the degree of visual distraction of the real environment. The type of content identified affects how the augmented content display attributes are changed to reduce distraction. For example, a video can be modified by having its playback speed reduced or the volume of the associated audio turned down. However, changes such as reducing volume and slowing playback cannot be applied to graphics such as text.
[00105] In the arrangements described hereafter, the content type is text. The characteristics of text that can be adjusted include brightness, contrast with background, size, colour, location, angle, animation, degree of fading and detail. Detail of text relates to an amount of information contained in the text. Detail of text could relate to a high level heading summary or a low level detailed description. Modifying detail of text may comprise displaying a summary of the text. Modifying location of augmented reality content may comprise changing the location of the augmented reality content within one visual zone or moving the augmented reality content to another visual zone. In the arrangements described the brightness of the text is modified so that the augmented content is less distracting when the user is in a high cognitive load state. Angle of a text object relates to an angle formed by the text relative to a boundary of the scene 187.
[00106] The method 500 progresses from the step 520 to an identifying step 530. In step 530 a visual zone is identified by obtaining data from step 240 in which the zone was selected. Once the direction of the variation in cognitive load has been determined (step 510), the type of content identified (step 520) and the visual zone identified (step 530) the method 500 progresses to a compare step 540. Data obtained and identified in the steps 510 to 530 is compared in step 540 to a rule table to determine how the augmented content is to be adjusted.
[00107] Fig. 8 shows an example rule table 800. The rule table may be determined experimentally, or based upon research data or the learning period of the user. In the arrangements described the variation in cognitive load, indicated by a column 810, is identified as increasing, and the content type, indicated by a column 820, is text. The table 800 indicates that the Change type, shown by a column 850, should be to dim, that is, reduce the brightness of the augmented content. The table 800 indicates that the change is not uniformly applied to all visual zones. In other words, modification of a display characteristic for augmented reality content in one visual zone can differ in relation to modification of the display characteristic for augmented reality content in another visual zone. In particular, the Time to change, shown in a column 840, and the Amount of change, shown in a column 860, vary according to the zone (shown in a column 830). The non-uniform change of the column 860 is designed to reduce user distraction by dimming the augmented content which is most distracting, that is, the content in zone 3, the far-periphery zone 640. The rule table 800 does not dim the content in zone 1, the close visual zone 620, because content in the zone 620 may be in use. The table 800 indicates in zone 1 (corresponding to the close visual zone 620) that the amount of change is 0%, that is, no change. In zone 2, which corresponds to the mid-periphery zone 630, the modification occurs at a medium pace (2 seconds) and partially dims the content to 40%. In zone 3, which corresponds to the far-periphery visual zone 640, the change occurs relatively quickly (1 second) and the augmented content is substantially dimmed to 80%.
[00108] The arrangements described above refer to a user who is entering a high cognitive load state. If the user is exiting a high cognitive load state, the changes to the augmented content would be different. Exiting a high cognitive load state is shown on the pupil dilation pattern 900 (Fig. 9) as a steep negative gradient 940 in pupil dilation, indicating that the user exits the high cognitive load state rapidly. The change in cognitive load direction (exiting) and speed would affect the Change type 850 and Time to change 840 instructions in the rule table 800. In an ‘exiting high cognitive load’ scenario the Time to change 840 would be uniform and rapid (for example less than 1 second across all visual zones). In an ‘exiting high cognitive load’ scenario the Change type 850 would be to brighten the displayed augmented content instead of dimming it. Such changes would assist the user in less targeted, high level browsing type tasks.
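A minimal sketch of classifying the direction of the cognitive load transition from the gradient of the pupil dilation signal, and of the uniform rapid change applied on exit, is given below. The sampling interval, threshold and return values are illustrative assumptions rather than parameters of the described arrangements.

    def classify_transition(dilation_samples, dt=0.1, threshold=0.2):
        """Return 'entering', 'exiting' or 'steady' from recent pupil
        dilation samples. A steep negative gradient (such as gradient 940
        in Fig. 9) indicates a rapid exit from a high cognitive load state."""
        if len(dilation_samples) < 2:
            return "steady"
        gradient = (dilation_samples[-1] - dilation_samples[0]) / (
            dt * (len(dilation_samples) - 1))
        if gradient > threshold:
            return "entering"
        if gradient < -threshold:
            return "exiting"
        return "steady"

    def exit_modification(zone):
        """On exit, brighten uniformly and rapidly in every zone
        (illustrative values: full restore in under 1 second)."""
        return ("brighten", 0.5, 1.0)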
[00109] Referring to Fig. 6, the zones 620, 630 and 640 are based on the focal zones of the user. In other arrangements the zones 620, 630 and 640 are determined according to work zones on a desk. The closest zone, 620, is the primary work zone, reflecting an area where primary work tasks generally occur. The secondary work zone 630 is where documents and objects loosely related to the primary task are generally located. The tertiary work zone 640 is where documents that are not relevant to the current task are generally stored. In some arrangements the size of the work zones 620, 630 and 640 is modified by the posture of the user and based upon the determined variation of cognitive load. For example, when the user is leaning back from the desk the work zones are determined as above by the work task zones. When the user leans in toward the desk, indicating the user is deeply concentrating on a task, the size of the primary work zone 620 typically shrinks to include only those items or objects that are the focus of the current task, as indicated by a sustained user gaze location. In this instance the primary work zone shrinks to the exact shape of the current objects or documents, any augmented content associated with them, and any objects or documents they overlap. The secondary and tertiary zones may not change.
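Such posture-driven resizing of the primary work zone may be sketched as follows, under stated assumptions: the Bounds rectangle and the posture and gaze inputs are hypothetical stand-ins for data that would be provided by the system's sensors.

    from typing import Iterable, NamedTuple

    class Bounds(NamedTuple):
        x0: float
        y0: float
        x1: float
        y1: float

    def union_bounds(rects: Iterable[Bounds]) -> Bounds:
        """Smallest rectangle enclosing all of the given rectangles."""
        rects = list(rects)
        return Bounds(min(r.x0 for r in rects), min(r.y0 for r in rects),
                      max(r.x1 for r in rects), max(r.y1 for r in rects))

    def primary_work_zone(default_zone, leaning_in, gazed_object_bounds):
        """Shrink the primary zone to the objects under sustained gaze
        (plus their associated augmented content) when the user leans in
        toward the desk; otherwise keep the desk-based default zone. The
        secondary and tertiary zones are left unchanged."""
        if leaning_in and gazed_object_bounds:
            return union_bounds(gazed_object_bounds)
        return default_zone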
[00110] In the arrangements described above, the visual attribute changed to reduce visual distraction was brightness. In other arrangements the visual attributes changed to reduce visual distraction are luminance contrast and animation or video speed. Figs. 7A and 7B illustrate how a change in pupil dilation affects the visual display attributes of augmented content 710a, 720a, 710b, 720b. When a user is in a low cognitive load state, indicated by a non-dilated pupil 700a as shown in Fig. 7A, the augmented content 710a and 720a in both visual zones 760 and 770 is presented similarly. The content 710a and 720a is displayed having attributes of high contrast and animation. The visual zones 760 and 770 relate to the zones 620 and 630 of Fig. 6, respectively.
[00111] The user enters a deeply concentrating state as indicated by a dilated pupil 700b in Fig. 7B. In this instance of Fig. 7B, the augmented content display attributes are adjusted depending on the visual zones 760 and 770 in which the content is located. In the example of Fig. 7B, the luminance contrast of augmented content 710b in the mid-periphery visual zone 770 is reduced and the speed of the animation of the augmented content 710b is slowed to a stop. Such changes reduce the visual distraction caused by the content 710b, as the content 710b is determined to be of lower importance, being relatively distant from a target object 750. In contrast, the display attributes of augmented content 720b in the closer visual zone 760 are not changed, as the content 720b is closer to the target object 750 and assumed to be relevant to the user.
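By way of illustration, zone-dependent attenuation of contrast and animation speed may be sketched as follows. The distance threshold, the attribute names and the content representation are assumptions for illustration only.

    def attenuate(content, distance_to_target, high_load, near_radius=0.3):
        """Leave content near the target object untouched; for distant
        content under high cognitive load, reduce luminance contrast and
        slow the animation to a stop (as for content 710b in Fig. 7B)."""
        if not high_load or distance_to_target <= near_radius:
            return content  # e.g. content 720b in the closer zone 760
        content["contrast"] *= 0.4        # reduce luminance contrast
        content["animation_speed"] = 0.0  # slow the animation to a stop
        return content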
[00112] In the arrangements described above in relation to Fig. 6, the augmented digital content 650, 660, and 670 is adjusted to reduce visual distraction when the user is in a high cognitive load state. However, physical objects in the environment surrounding the user may also be distracting but are not modified. In some implementations, augmented content can be used to diminish or obscure objects in the physical environment. Figs. 10A and 10B show how augmented content can be used or modified to diminish the visual appearance of physical objects in the surrounding environment to the user.
[00113] In Fig. 10A the user is in a low cognitive load state indicated by a pupil 1000a which is not dilated. Fig. 10A shows a target object 1050 augmented with content 1020a, and a person 1010a moving around in the surrounding environment in the user’s field of vision. In the scenario of Fig. 10A two visual zones are determined. The closest zone is a zone 1030a and is defined by an edge of a work table. A second more distant zone 1040a is defined by an extent of a room in which the user is positioned. In Fig. 10A the person 1010a is visible to the user.
[00114] The user enters a higher cognitive load state and the resulting display of augmented content is shown in Fig. 10B. The user’s dilated pupil is shown as pupil 1000b. The moving person 1010a in the more distant zone 1040a is obscured by new augmented content 1010b. The augmented content 1010b has an effect of making the person 1010a of the surrounding environment less visually distracting to the user. In this example of Figs. 10A and 10B a higher cognitive load state does not change the display of the augmented content 1020a in the close zone 1030a. Other methods of reducing the visual distraction of moving objects in the user’s peripheral vision include reducing the contrast of the objects, covering the objects with an image replicating the background environment, covering the objects with a neutral toned graphic to block the objects out, or generating a graphic with slow constant movement to obscure the more random movement.
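A minimal sketch of selecting between the obscuring methods listed above is given below; the strategy names and inputs are illustrative assumptions rather than part of the described arrangements.

    def obscure_strategy(object_motion, background_available):
        """Choose how to diminish a distracting moving physical object
        (such as person 1010a) in a peripheral zone."""
        if background_available:
            # Cover the object with an image replicating the background.
            return "background_patch"
        if object_motion == "random":
            # Overlay a graphic with slow constant movement to mask the
            # more random underlying motion.
            return "slow_motion_overlay"
        # Otherwise block the object out with a neutral toned graphic,
        # or simply reduce its contrast.
        return "neutral_block"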
[00115] The arrangements described are applicable to the computer and data processing industries and particularly for the augmented reality industries.
[00116] The arrangements described provide a means of adjusting the display of augmented reality content based upon a concentration level (indicated by cognitive load) of a user. By selecting visual zones and adjusting display attributes or characteristics of content within those zones based on the variation in cognitive load, the arrangements described provide a means of reducing distraction to the user without the user being overly aware of the adjustment.
[00117] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

Claims (19)

1. A method of modifying display of augmented reality content, the method comprising:
determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user;
selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and
modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
2. The method according to claim 1, wherein the sensed biometric parameter relates to dilation of pupils of the user.
3. The method according to claim 1, wherein determining the variation in cognitive load comprises determining that the user is transitioning from a relatively low cognitive load to a relatively high cognitive load.
4. The method according to claim 1, wherein determining the variation in cognitive load comprises determining that the user is transitioning from a relatively high cognitive load to a relatively low cognitive load.
5. The method according to claim 1, wherein determining the variation in cognitive load comprises determining that the user has a fluctuating level of cognitive load.
6. The method according to claim 1, wherein the augmented reality content is at least one of text, animation, video, graphics and augmented content diminishing a surrounding environment from the field of view of the user.
7. The method according to claim 1, wherein a speed of implementation of the modification of the display characteristic is adjusted based upon the determined variation of cognitive load.
8. The method according to claim 1, wherein a size of the selected at least one visual zone is changed according to the determined variation of cognitive load.
9. The method according to claim 1, further comprising determining a plurality of visual zones of the user, the at least one visual zone being selected from the determined plurality of zones.
10. The method according to claim 9, wherein the plurality of visual zones is based upon one of a distance from a focal point of the user, an area of a workspace, a proximity to a target object, and a distance from the user.
11. The method according to claim 1, wherein modifying the display characteristic comprises moving a location of the augmented reality content from one visual zone of the user to another visual zone of the user.
12. The method according to claim 1, wherein the augmented reality content is text and modification of the display characteristics comprises generating and displaying a summary of the text.
13. The method according to claim 1, wherein modifying display of the augmented reality content comprises diminishing visual prominence of a portion of the surrounding environment in the field of view of the user.
14. The method according to claim 1, wherein modifying the display characteristic of the augmented reality content comprises modifying at least one of brightness, colour, animation, contrast, luminance, transparency, location, size, degree of fading and angle of the augmented reality content.
15. The method according to claim 1, wherein the augmented reality content is video content and modifying the display characteristics comprises modifying a playback speed of the video content.
16. The method according to claim 1, wherein the modification of the display characteristic of the augmented reality content in one selected visual zone of the user differs with respect to modification of the display characteristic of the augmented reality content in another selected visual zone of the user.
17. A non-transitory computer readable storage medium having a computer program stored thereon for modifying display of augmented reality content, comprising:
code for determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user;
code for selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and
code for modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
18. An augmented reality system, configured to:
display augmented reality content to a user of the augmented reality system;
sense a biometric parameter of the user;
determine a variation of cognitive load of a user viewing the augmented reality content according to the sensed biometric parameter of the user;
select at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and
modify, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
19. An apparatus, comprising:
a processor;
a sensor for sensing a biometric parameter of a user of the apparatus; and
a memory, the memory having instructions thereon executable by the processor to:
display augmented reality content to the user, and
modify display of augmented reality content in an augmented reality environment, by:
determining a variation of cognitive load of a user viewing the augmented reality content according to a sensed biometric parameter of the user;
selecting at least one visual zone of the user based on the determined variation of cognitive load of the user, the at least one visual zone being related to a field of view of the user, each of the at least one visual zones having a different visual acuity to one another; and
modifying, based on the determined variation in cognitive load, a display characteristic of the augmented reality content within the selected at least one visual zone to adjust a degree of visual distraction of the augmented reality content to the user.
AU2016201929A 2016-03-29 2016-03-29 System and method for modifying display of augmented reality content Abandoned AU2016201929A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2016201929A AU2016201929A1 (en) 2016-03-29 2016-03-29 System and method for modifying display of augmented reality content

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2016201929A AU2016201929A1 (en) 2016-03-29 2016-03-29 System and method for modifying display of augmented reality content

Publications (1)

Publication Number Publication Date
AU2016201929A1 true AU2016201929A1 (en) 2017-10-19

Family

ID=60051240

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2016201929A Abandoned AU2016201929A1 (en) 2016-03-29 2016-03-29 System and method for modifying display of augmented reality content

Country Status (1)

Country Link
AU (1) AU2016201929A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10764656B2 (en) 2019-01-04 2020-09-01 International Business Machines Corporation Agglomerated video highlights with custom speckling
CN112749558A (en) * 2020-09-03 2021-05-04 腾讯科技(深圳)有限公司 Target content acquisition method and device, computer equipment and storage medium
WO2022049450A1 (en) * 2020-09-03 2022-03-10 International Business Machines Corporation Iterative memory mapping operations in smart lens/augmented glasses
US11620855B2 (en) 2020-09-03 2023-04-04 International Business Machines Corporation Iterative memory mapping operations in smart lens/augmented glasses
GB2612250A (en) * 2020-09-03 2023-04-26 Ibm Iterative memory mapping operations in smart lens/augmented glasses
CN112749558B (en) * 2020-09-03 2023-11-24 腾讯科技(深圳)有限公司 Target content acquisition method, device, computer equipment and storage medium


Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application