WO2012035119A1 - Memory aid - Google Patents

Memory aid

Info

Publication number
WO2012035119A1
WO2012035119A1 (application PCT/EP2011/066049)
Authority
WO
WIPO (PCT)
Prior art keywords
data
information
image
capture
rules
Application number
PCT/EP2011/066049
Other languages
French (fr)
Inventor
Geoffrey Victor Merrett
Dirk De Jager
Bashir Mohammed Ali Al-Hashimi
Wendy Hall
Nigel Richard Shadbolt
Original Assignee
University Of Southampton
Application filed by University Of Southampton filed Critical University Of Southampton
Publication of WO2012035119A1

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00: Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02: Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/40: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management of medical equipment or devices, e.g. scheduling maintenance or upgrades
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • An eleventh aspect of the present invention provides an apparatus comprising a processor and memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus at least to perform: receiving data; and changing a set of capture rules stored in memory on the basis of the data received, which set of capture rules define when a camera is to be operated to capture an image on the basis of sensor data from one or more sensors.
  • a twelfth aspect of the present invention provides a device for capturing images, the device comprising a camera, one or more sensors, a transmitter, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to compile information comprising (a) image data defining an image captured by the camera and one or both of (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture, and to cause the transmitter to send the information towards a second device.
  • a thirteenth aspect of the present invention provides a device comprising a camera, one or more sensors, a receiver, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to change the set of capture rules stored in the memory on the basis of data received from a second device via the receiver.
  • Figure 1 is a schematic diagram showing a system embodying a first embodiment of the present invention;
  • Figure 2 is a diagram showing the transfer of messages between tiers of the system of Figure 1;
  • Figure 3 is a schematic diagram showing components of a wearable device that forms part of the system of Figure 1 and that is in accordance with the first embodiment of the present invention;
  • Figure 4 is a perspective view showing a prototype of the wearable device of Figure 3;
  • Figure 5 is a flow diagram illustrating the process of compiling an information shard, as performed at the wearable device of Figure 3 in accordance with the first embodiment of the present invention;
  • Figure 6 is a schematic diagram showing the structure of a rule database used by the wearable device of Figure 3;
  • Figure 7 is a schematic diagram showing the different information components of an information shard, as used in the first embodiment of the present invention; and
  • Figure 8 is a schematic diagram showing the functionality of an internet service application, according to the first embodiment of the present invention.
  • the first embodiment provides a system to facilitate active recall of normal and routine daily activities, people, places and objects, and cue a user with associated data related to these.
  • Prospective memory, known as 'remembering to remember' (as discussed in E. Winograd, "Some observations on prospective remembering," Practical aspects of memory: Current research and issues, vol. 1, pp. 348-353, 1988), is the act of remembering an intention or action.
  • prospective memory support is defined herein as the action of providing a user with memory cues beyond a traditional alarm.
  • the system of the first embodiment provides contextual cues (which have been processed online), to assist the user with the intention of remembering future events.
  • An example message of the system would be: 'You are talking to Miss Smith, whom you last spoke to on Wednesday afternoon at work'. In the first embodiment, this is done through a three-tier assistive technological system to sense, process and assist the user's real-time experiences.
  • the three-tier system architecture, in which each tier fulfils a unique role based on its abilities and constraints, comprises three components: an external peripheral sensing device (herein called the DejaView Device), an Internet-connected mobile telephone handset application, and an online Internet service application which is in turn connected to the user's online presence and social networks.
  • the system assists in active recall by analysing what the user is currently experiencing or dealing with and feeding back relevant information to the user. This automatic cueing reminds the user of features within the current environment, data associated with the current context, and information related to the current context from the user's social networks and online Internet presence.
  • the system is intended to support sufferers of early stage dementia, and normal age related memory loss. This pervasive, context-aware sensing, processing and automated user feedback system is called "DejaView" herein.
  • the conceptual design for this system is a portable, unobtrusive and continuous use memory aid.
  • the system attempts to remind users of important factors in their current environment classified into cues relating to people, places, objects and actions.
  • the concept of the system has a number of significant advantages over existing devices, representing a step change in the capability of technology-based memory aids.
  • a) a low-power, wearable, intelligent device which autonomously captures images and sensed information to efficiently cue autobiographical memories;
  • b) a web-enabled, wireless system that integrates with the user's mobile handset to add contextual information, provide feedback, and link the DejaView device to the Internet;
  • c) the automatic annotation and analysis of images using multiple distributed databases, allowing the system to effectively present the user with the contextual cues of relevant images, and allow further effective querying from a knowledge repository.
  • having fetched the sugar and given it to Mrs Jones, Mr Jefferies asks her how Mr Jones is doing. They converse for a short while, and afterwards he heads back indoors and sits down on his chair. As he looks down at his mobile handset, he sees that he was making a cup of tea, and heads back into the kitchen to continue making it. Having had his breakfast, Mr Jefferies starts walking to the corner shop for his weekly groceries. As he leaves the house and starts walking away from his car, his phone vibrates, and a message is displayed on his phone reminding him that he has a doctor's appointment in twenty minutes, which he will be late for if he does not drive there soon. Mr Jefferies decides to postpone his shopping and go to the doctor instead.
  • Data from the wearable device is processed and classified on an energy efficient, lightweight processor which assesses the value of the information.
  • valuable information, as classified by a set of rules called the 'capture' rules, is transmitted to the mobile handset.
  • the mobile handset then further classifies and processes the information, appends further contextual data, such as data resulting from a feature detection process or data (e.g. GPS data) indicating the geographic location of the DejaView device and/or the mobile handset, and uploads relevant information to an online internet application for further data analysis, processing, storage and offline review.
  • the information online is also compared to the user's online Internet presence (activities such as: checking their online calendar to ensure that they are not far from a pre-arranged appointment; checking their social networks to see whether they are in contact with someone currently; and comparing images of people they are currently talking to with images of people they know, such as by comparing the captured image with images previously captured and stored by the remote system).
  • Information extracted and evaluated by this process is then compared to a table specifying rules on when and how to contact the user (notification rules), and the user is then informed as required by these rules. Updated and processed information is then also transmitted back to the handset for communication to the user, along with rule updates to the periphery device and handset.
  • the mobile handset is used as the primary interface to the whole system, although a secondary interface is also used to address the periodic system feedback and memory planning. Using the mobile handset as the primary user interface allows a high level of system mobility and interoperability.
  • the data communicated using this system is encapsulated into three different objects according to their function.
  • a data object carrying information from the sensor or handset to the online service application is known as an Information 'Shard'.
  • Information objects fed back from the Online Internet Application to the Mobile Handset and the periphery sensing device are known as the 'review rule update' and 'capture rule update' respectively.
  • Data transmitted to the user, a carer, or another interested party is a Notification.
  • Each of the three components of the system has a specific role, with clear inputs and outputs to perform its function.
  • the role of the DejaView Device is to sense data autonomously and energy efficiently, and to alert the other system components of the important information.
  • the mobile handset has two roles: a) to communicate the important information to the Online Internet Application, and b) to provide an interface with which the user can interact.
  • the Online Internet application is the data processing centre. It provides all the tools necessary to analyse images and data intensively and rapidly, in order to provide the user with a notification. It also processes the information supplied by the other components to evaluate whether rules need updating. Its final function is to provide a Knowledge repository where all the data is stored for further offline analysis, manual annotation and periodic review/planning.
  • the main components are a low-power microcontroller (MCU), a Complex Programmable Logic Device (CPLD), a CMOS camera sensor, a low-power static memory (SRAM), a set of low-power sensors, a three-colour LED indicator, and a Bluetooth communications radio.
  • the Bluetooth communications radio may be replaced by some other, preferably wireless, transmitter and receiver for communication with the mobile handset.
  • the MCU turns off the transmitter/receiver when communication with the mobile handset is not required, in order to conserve energy.
  • the MCU turns off the camera a set period of time after an image has been captured, in order to conserve energy.
  • the DejaView Device is designed to be worn as close as possible to the point of view of the user.
  • a prototype of the DejaView Device can be seen in Figure 4.
  • the device may be powered by a lithium polymer battery, charged through a connector such as a micro-USB connector.
  • a 550 mAh battery and an 800 mAh battery have each been found to be suitable.
  • the device weighs no more than 80 grams. In some embodiments it weighs no more than 70 grams. More preferably it weighs no more than 60 grams, and more preferably still it weighs no more than 50 grams.
  • the set of low-power sensors in this embodiment comprises a microphone, a passive infrared sensor (PIR), a light sensor, an accelerometer, and a compass. In other embodiments, one or more of these sensors may be omitted and/or other sensors may be provided. Sensor data from the one or more low-power sensors indicates characteristic(s) of the environment within which the sensors are disposed, and thus can be considered contextual data, i.e. data representative of the user's current context.
  • the DejaView Device software is built upon a version of the Unified Framework, discussed in G. V. Merrett, et al. (2006).
  • the low-power sensors may constantly monitor the user's environment and output data, i.e. sensor data.
  • the application-level software running on the Unified Model processes the sensor data against an in-memory table of rules to assess the value of the sensor data. When the sensed data is judged, against the in-memory rules, to be of sufficient value, the higher-power camera sensor is turned on to capture an image.
  • the processor (MCU) monitors the output of the one or more low-power sensors present in the device and, when sensor data from one or more of the low-power sensors meets certain criteria set out in one or more of the capture rules, such as having a value that exceeds a certain, predetermined threshold value, the processor (MCU) triggers the camera to capture a photo. This allows the camera, a relatively high-power device, to be turned off until the processor determines that an image should be captured, thus saving power.
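The capture-trigger logic lends itself to a short sketch. The following Python is a minimal illustration only: the sensor names, threshold values and camera stub are assumptions made for the example, not details taken from the patent. It shows the pattern described above, in which the camera stays powered down until a capture rule fires against the low-power sensor readings.

    # Minimal sketch of capture-rule checking; names and thresholds are assumed.
    CAPTURE_RULES = [
        {"id": 1, "sensor": "pir", "test": lambda v: v > 0.5},    # movement seen by the PIR
        {"id": 2, "sensor": "light", "test": lambda v: v > 300},  # scene bright enough to photograph
    ]

    def poll_sensors():
        # Stand-in for reads from the microphone, PIR, light sensor, etc.
        return {"pir": 0.8, "light": 420, "mic": 0.1}

    def capture_image_with_camera():
        # Stand-in for powering up the CMOS camera and grabbing a frame.
        return b"<jpeg bytes>"

    readings = poll_sensors()
    triggered = [r["id"] for r in CAPTURE_RULES if r["test"](readings[r["sensor"]])]
    if triggered:
        # The relatively high-power camera is only switched on now.
        image = capture_image_with_camera()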
  • although Figure 5 is self-explanatory, for completeness it illustrates all of (a) image data defining the captured image, (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture being compiled into the Information Shard for sending to the mobile handset.
  • the Information Shard can be considered to describe a self-contained event, based on the low-power sensors' data and the image data.
  • the Information Shard may permit the application running at the mobile handset and/or the application running online to determine which rule(s) were used to trigger the capture and the sensor data that triggered the effect of the rule(s).
  • one or the other of (b) and (c) may be omitted from the Information Shard, such that the Information Shard comprises only (a) and (b) or (a) and (c).
  • the sensor data compiled into the Information Shard comprises the sensor data output from all of the low-power sensors at the time of image capture, or at least at the time the camera was triggered to capture the image.
  • all of the rules of the set of capture rules are compiled into the Information Shard, so as to indicate the state of the rule set at the time the image was captured.
  • the wearable device would preferably comprise a sensor, such as a global positioning system (GPS) sensor, for determining the geographic location of the device, and the further contextual data would comprise GPS data.
  • the compiling of an Information Shard comprises appending one or both of the sensor data and the data indicating one or more rules as metatags in an image file that defines the captured image.
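As a concrete illustration of such metatagging, the snippet below embeds the sensor readings and the IDs of the triggering rules as text chunks in a PNG file using Pillow. The choice of PNG text chunks (rather than, say, EXIF fields) and the key names are assumptions made for the example; the patent only requires that the data be appended as metatags in the image file.

    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def tag_image(src_path, dst_path, sensor_data, rule_ids):
        # Append sensor data and triggering-rule IDs as metadata in the image file.
        meta = PngInfo()
        meta.add_text("dejaview:sensor_data", json.dumps(sensor_data))
        meta.add_text("dejaview:capture_rules", json.dumps(rule_ids))
        with Image.open(src_path) as img:
            img.save(dst_path, pnginfo=meta)

    tag_image("capture.png", "shard.png", {"pir": 0.8, "light": 420}, [1, 2])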
  • the application continuously captures images and sensor data according to the 'capture' rule database until the rules are changed. Capture rules can be adaptively changed through feedback from the handset and the online web server.
  • the Rule database is structured as can be seen in Figure 6.
  • Each rule has: an identifier of the rule in the rule set ("ID"); an indication of the low-power sensor(s) whose output value(s) the rule applies to ("Sensor"); an algorithm or formula into which the output value(s) of the sensor(s) are input for comparison ("Rule"); an indication of an action to be performed if the comparison has a positive outcome ("Result if true"); an indication of an action to be performed if the comparison has a negative outcome ("Result if false"); and an indication of rule lifetime ("Lifetime") that dictates when the rule is in effect, as discussed in more detail below.
  • a rule may be in effect at all times, or only at certain time(s).
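A rule record with the six fields of Figure 6 might be modelled as below. This is a sketch of the structure only; the field types and the example rule are assumptions, since the patent does not prescribe an encoding.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class CaptureRule:
        rule_id: int                      # "ID": identifier of the rule in the rule set
        sensor: str                       # "Sensor": which low-power sensor output(s) the rule reads
        rule: Callable[[float], bool]     # "Rule": formula the sensor value(s) are input into
        result_if_true: str               # "Result if true": action on a positive comparison
        result_if_false: str              # "Result if false": action on a negative comparison
        lifetime: Optional[float] = None  # "Lifetime": seconds in effect; None = permanent

    rule = CaptureRule(1, "microphone", lambda v: v > 0.6,
                       "capture_image", "camera_off", lifetime=3600.0)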
  • Capture rules can contain variables, and a small set of pre-defined functions such as sum, magnitude, and time-delayed values.
  • the time-delayed values can be viewed as 'taps' in a Finite Impulse Response (FIR) filter to create reasonably complex filters in the rule database.
  • the pre-defined functions are simple, often used functions which allow rules to be created using simple structures.
  • a fixed number of variables, due to the limitation of memory size, can be used within the rules database to allow more complex sets of rules based on a set of individual rules.
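The time-delayed values can be pictured as in the sketch below: a short history of samples acts as the taps of an FIR filter, and a rule becomes a weighted sum over that history compared against a threshold. The coefficients and the 'sustained rise' interpretation are illustrative assumptions.

    from collections import deque

    TAPS = 4
    history = deque([0.0] * TAPS, maxlen=TAPS)  # newest sample at index 0, older samples behind it

    def fir(coefficients, samples):
        # Weighted sum of current and time-delayed samples: a small FIR filter.
        return sum(c * s for c, s in zip(coefficients, samples))

    def sustained_rise(samples):
        # Example rule: fire on a sustained rise in microphone level.
        return fir([0.4, 0.3, 0.2, 0.1], samples) > 0.5

    history.appendleft(0.7)  # a new microphone sample arrives
    if sustained_rise(list(history)):
        print("rule fired")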
  • the rule lifetime allows rules to last for either a specified short period of time, or a longer, more permanent time period. For example, a rule may expire after a set period of time and be deleted from the rule set. Alternatively, a rule may be time specific such that it causes one effect during a first period of time, and a different effect during a second period of time. This allows the Internet Application to tune the sensitivity of the DejaView device so that it captures more images during time periods the Internet Application regards as more relevant to the user, and fewer images during periods in which it regards the user's activity as less important to capture.
  • the processor of the DejaView device changes the set of capture rules on the basis of a 'capture rule update' received from the mobile handset.
  • the processor may change the set of rules by adding a new rule, deleting an existing rule, or amending an existing rule.
  • the processor may set a rule that recites a threshold value or a range of values against which a value of sensor data from one or more of the low-power sensors is compared.
  • the set rule may dictate a certain action (e.g. capture image) if the value of the sensor data matches or exceeds the threshold value or falls within the range, respectively (i.e. "result if true"), and dictate a second action (e.g. maintain off state of camera) if the value of the sensor data is less than the threshold value or outside of the range, respectively (i.e. "result if false").
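A sketch of how the device might apply a 'capture rule update' follows, reusing the CaptureRule record sketched earlier. The update format (a dict with an 'op' field) is an assumption; the patent states only that rules can be added, deleted or amended.

    def apply_rule_update(rule_set, update):
        # rule_set maps rule_id -> CaptureRule; update comes from the mobile handset.
        if update["op"] == "add":
            rule_set[update["rule"].rule_id] = update["rule"]
        elif update["op"] == "delete":
            rule_set.pop(update["rule_id"], None)
        elif update["op"] == "amend":
            # e.g. retune a threshold so the device captures more or fewer images
            rule_set[update["rule_id"]].rule = update["new_rule"]
        return rule_set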
  • the application comprises two main components: a component allowing the user to simply and easily interface with the pervasive system; and a communication component allowing the information gathered by the DejaView Device to be further processed, classified and forwarded on to the Internet Service application.
  • the Interface component provides the user with a system to value previously captured information, and change current rules using a simple interface. In this way, a capture rule update may be generated and sent to the DejaView device to cause the processor of the device to change the set of capture rules.
  • the interface component also acts as an interface to the Online Internet Service Application allowing the user to link their online presence to the device and handset.
  • Using the mobile handset as the user interface to the DejaView system removes the requirement for the DejaView device to have its own interface, which again reduces the size, weight and power demand of the wearable device, and also simplifies the user experience.
  • Communication from the DejaView Device to the Internet Application is performed through the mobile handset, and thus runs as a separate process on the handset.
  • the process waits for data to arrive from the DejaView Device.
  • the application analyses the incoming data shard according to a set of rules called the 'refine' rules database.
  • Refine Rules are written in a similar format to the 'capture' rules, but comprise more complex functions and rule types allowing for basic image processing functions such as edge detection, feature detection and basic face detection (preferably not face recognition).
  • Features detected could include an object, a place, or a person.
  • the shard is then pushed on to the Online Internet Service Application for final processing, preferably with contextual data generated at the mobile handset appended to the information received at the mobile handset from the DejaView device, such that the Information Shard uploaded from the handset comprises augmented information, as indicated in Figure 2.
  • contextual data may be data resulting from the feature detection process that indicates a detected feature, and/or data (e.g. GPS data) indicating the geographic location of the DejaView device and/or the mobile handset.
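A handset-side refine step of this kind might look as follows. OpenCV's stock Haar cascade is used here purely as a stand-in for the patent's unspecified 'basic face detection'; the feature format is likewise an assumption.

    import cv2  # OpenCV assumed as the handset-side image-processing library

    face_model = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def refine(image_path):
        # Basic face *detection* (not recognition), returning detected features
        # that can be appended to the Information Shard as contextual data.
        grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        faces = face_model.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
        return [{"type": "face", "bbox": [int(x), int(y), int(w), int(h)]}
                for (x, y, w, h) in faces]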
  • Figure 7 details the different information components of an information shard.
  • the information contained in the shard is intended to describe a self-contained experience based on the sensors' knowledge. On leaving the handset, an information shard should contain: the important information (Image Data); why the information is important (Detected Features); the information used to make the decision (Sensor Data); and the rules used to reach the decision (Rules Used).
  • the Information Shard sent to the mobile handset from the wearable device may exclude one or other of the sensor data and the rules used. In such cases, the excluded data may also be excluded from the information shard sent from the mobile handset to the remote system.
  • although the information sent in this shard might appear to be redundant, it is required to ensure that the further stages in the pervasive system can understand which rules were used to trigger the capture, and thus change those rules should they decide to do so.
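The four information components of Figure 7 suggest a container along the following lines. The serialisation (JSON with hex-encoded image bytes) is an assumed wire format for illustration only.

    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class InformationShard:
        image_data: bytes                                       # the important information
        detected_features: list = field(default_factory=list)  # why the information is important
        sensor_data: dict = field(default_factory=dict)         # data used to make the decision
        rules_used: list = field(default_factory=list)          # rules used to reach the decision

        def to_upload(self):
            body = asdict(self)
            body["image_data"] = self.image_data.hex()  # illustrative encoding
            return json.dumps(body)

    shard = InformationShard(b"\xff\xd8", [{"type": "face"}], {"pir": 0.8}, [1])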
  • the mobile handset application may receive image data defining plural images, preferably in plural received Information Shards.
  • the application may analyse the images defined by the image data and discard some of the image data, for example if the image data indicates that the image captured is unclear, perhaps by way of the camera sensor being at least partially obscured at the time of capture.
  • the mobile handset application would then cause data defining only some of the plural images to be uploaded.
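One plausible clarity test for deciding which images to discard is the variance of the Laplacian, which is low for blurred or partially obscured captures. The patent does not name a specific test; this heuristic and its threshold are assumptions.

    import cv2

    def is_clear(image_path, threshold=100.0):
        # Sharpness heuristic: low Laplacian variance suggests a blurred
        # or partially obscured capture. The threshold is a tuning guess.
        grey = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(grey, cv2.CV_64F).var() >= threshold

    # Upload only the images that pass the test:
    # to_upload = [s for s in shards if is_clear(s.image_path)]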
  • the Online Internet Service ( Figure 8) provides a mechanism for long-term backed up data storage, powerful offline data analysis and annotation, a gateway to external online services, such as face recognition services, a service to update the rules databases on both the handset and DejaView Device, a Knowledge repository for periodic review, and a Notification Engine which describes who and how the user, their carer or any other interested party should be contacted.
  • the Internet service application waits for new information shards to be received from the mobile handset, and on reception processes the data on a rule set for the user called the 'compare' rules.
  • the compare rules, which supersede the capture and refine rules, describe the required comparison of the current information shard to internal and external data sets, as well as running intensive processing on the image to extract information which would prove useful to the user in the future (such as classification of the objects within that environment). Compare rules can contain any subset of capture or refine rules.
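The compare stage could be sketched as below. The service gateways (an online calendar and an external face-recognition service) are hypothetical interfaces standing in for the internal and external data sets the patent describes.

    def run_compare_rules(shard, services):
        # Compare the shard to internal/external data sets and return annotations.
        annotations = []
        event = services["calendar"].next_event()  # hypothetical calendar gateway
        if event is not None:
            annotations.append({"type": "appointment", "detail": event})
        for feature in shard.detected_features:
            if feature["type"] == "face":
                # Hypothetical external face-recognition gateway.
                match = services["faces"].recognise(shard.image_data)
                if match:
                    annotations.append({"type": "person", "name": match})
        return annotations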
  • the Annotation Engine runs the set of compare rule commands through the data analysis engine, and stores the results in the Knowledge repository. Once stored in the knowledge repository, the results of the data analysis are then forwarded onto the Notification Engine, which parses the annotations into user friendly messages and informs the user (and anyone specified in the notification list) using the protocol specified in the notification list.
  • on receipt of compare results, the Notification Engine runs the annotations and contextual supporting data through a notification rule set, describing who the message should go to and how the message should be presented to the user. This information is compiled into a notification which aids the user with their current activity and assists in memory tasks.
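The notification rule set might then be applied as in the sketch below; the rule entries and the send() transport are assumptions illustrating how annotations become user-friendly messages routed to the user or a carer.

    NOTIFICATION_RULES = [
        {"match": "person",      "to": "user",  "via": "handset_display"},
        {"match": "appointment", "to": "user",  "via": "handset_vibrate_and_display"},
        {"match": "place",       "to": "carer", "via": "sms"},
    ]

    def notify(annotations, send):
        # Parse annotations into user-friendly messages per the notification rules.
        for a in annotations:
            for rule in NOTIFICATION_RULES:
                if rule["match"] == a["type"]:
                    message = f"{a['type']}: {a.get('name') or a.get('detail')}"
                    send(rule["to"], rule["via"], message)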
  • the online application receives data of an information shard from the mobile handset, analyses the received data to extract information and compare the data with other data, such as image data defining images accessible by the online application (e.g. images stored by a social network service, images previously received from the mobile handset, and/or stock images stored online or elsewhere and accessible by the online application), and provides annotations to the data of the information shard.
  • the online application may combine data from the information shard with other data to which the online application has access, such as data from one or more of an online calendar, a knowledge system, a social networking account, and data defining other images, such as data of an online image gallery, and/or the online application may change data from the information shard on the basis of such other data.
  • the online application might determine that a certain image defined in the received data comprises an image of a person the user knows, an image of a known article or object such as a kettle, and/or an image of a known place such as the user's front door or kitchen.
  • an annotation is provided to the data to indicate at least some of the content of the data.
  • the annotations may, for example, include an indication of the name or other information of a person identified in the image data of the shard, an indication of a place, or an indication of an object.
  • the annotations are then used by the notification engine to generate data of one or more notifications for sending to the mobile handset.
  • the data might include the name or other identifier of a person, place, object or action.
  • Data defining a notification is then sent to the mobile handset, which provides the user of the handset with a notification according to the data it receives from the online application.
  • the notification(s) are sent according to the notification rule set, which dictates the destination(s) and/or protocol or format of the notification.
  • the online application will determine that the mobile handset is to be a destination of the notification, on the basis of data indicating an address or identity of the mobile handset comprised in the Information Shard, or in session data accessible to the online application as a result of a session being present between the mobile application and the online application, or on the basis of some other data indicating an address or identity of the mobile handset.
  • the notification may be provided to the user by the mobile handset by way of the handset displaying a graphic image, a photograph, text, or any combination of these on a display of the handset.
  • the notification may be provided to the user by the mobile handset by way of the handset or wireless/wired headset emitting an audible sound, such as spoken word(s), to the user.
  • the data communicated to the user of the mobile device is associated with their current context (i.e. the context or environment they currently find themselves in), such as to provide the user with feedback that can assist them with a task they are currently, or were very recently, performing.
  • by processing the data comprised in the Information Shard received at the online application, the online application is able to evaluate whether the set of capture rules needs updating. For example, the online application might determine that the set of capture rules needs updating when it determines that images are being captured by the DejaView device at times of the day when there is little information of interest to be captured, or when it determines that the level of noise in the user's current environment is such that the camera of the DejaView device needs to operate only when the noise exceeds a different threshold. If it is determined that the capture rules need updating, the online application sends rule update information to the DejaView device via the mobile handset.
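The first example given above (captures clustering at uninteresting times of day) could be evaluated roughly as follows. The shard-log format and the 'ten captures, no detected features' heuristic are assumptions used to make the idea concrete.

    def evaluate_capture_rules(shards_log):
        # shards_log: dicts with a datetime 'captured_at' and a 'detected_features' list.
        updates = []
        by_hour = {}
        for s in shards_log:
            hour = s["captured_at"].hour
            by_hour.setdefault(hour, []).append(bool(s["detected_features"]))
        for hour, hits in by_hour.items():
            if len(hits) >= 10 and not any(hits):
                # Many captures but nothing of interest: raise thresholds for this hour.
                updates.append({"op": "amend", "hour": hour, "action": "raise_thresholds"})
        return updates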
  • the distributed architecture of the system maintains computational power whilst also minimising energy requirements of the wearable and portable parts of the system.
  • the system, facilitated by the three-tier architecture, analyses context with computationally-intensive processing, performed primarily by the online application and the mobile handset, while observing strict energy requirements for the wearable device.
  • the wearable device is able to draw a relatively low current, for example 38.5 mA, from its onboard power source.
  • This permits, in turn, the wearable device to have only a small and lightweight power source, resulting in a device with small overall dimensions and weight.
  • the user may still be provided with a memory aid that is able to carry out data-hungry processes, such as face recognition, and may receive notifications in real-time, or near real-time, that are relevant to their current context (i.e. environment). This cueing reminds the user of features within their current environment, data associated with their current context, and information related to their current context.
  • in an alternative embodiment, a device, preferably a mobile device, and more preferably a mobile telecommunication device, is provided that combines the above-described features and operations of the DejaView device and the mobile handset.
  • the device comprises one or more of the above- described low-power sensors and a camera, such as a CMOS camera sensor.
  • Memory at the device stores a set of capture rules that define when the camera of the device is to be operated to capture an image, on the basis of sensor data from the one or more sensors of the device.
  • a processor of the device receives from the camera image data defining the captured image.
  • the processor compiles information comprising image data defining an image captured by the camera and one or both of sensor data from one or more sensors and data indicating one or more rules of the set of capture rules which triggered the capture, and causes the compiled information to be transmitted to the remote system. That is, the processor causes a transmitter of the device to send the information towards the remote system.
  • the processor of the device causes further contextual data to be uploaded to the remote system, substantially as discussed above.
  • the processor may process the image data to detect features in the image, for example faces, and an indication of the features detected may be uploaded to the remote system as contextual data.
  • the device comprises a sensor, such as a global positioning system (GPS) sensor, for determining the geographic location of the device
  • the further contextual data uploaded to the remote system may comprise GPS data.
  • the device of this alternative embodiment receives notifications from the remote system and provides a user of the device with a notification according to the data it receives from the online application, as discussed above.
  • the notification may be provided to the user visibly and/or audibly, again as discussed above.
  • the data communicated to the user of the device is associated with their current context (i.e. the context or environment they currently find themselves in), such as to provide the user with feedback that can assist them with a task they are currently, or were very recently, performing.
  • the device of this alternative embodiment may receive capture rule updates from the remote system, as the DejaView device does in the first embodiment described above, which cause the processor of the device to update the set of capture rules stored in the memory of the device.
  • this alternative embodiment of the memory aid of the present invention maintains computational power whilst also minimising energy requirements of the user's device, i.e. the preferably portable, and more preferably wearable, part of the system.
  • the system analyses context with computationally-intensive processing, performed primarily by the online application, while observing strict energy requirements for the device.
  • the device is able to draw a relatively low current, which permits the device to have only a small and lightweight power source, resulting in a device with small overall dimensions and weight.
  • the user may still be provided with a memory aid that is able to carry out data-hungry processes, such as face recognition, and may receive notifications in real-time, or near real-time, that are relevant to their current context (i.e. environment).
  • each of the DejaView device, the mobile handset and the server running the online application inherently has apparatus comprising a respective processor and memory storing suitable computer program code for one or more programs, and in each device the memory and computer program code are configured to, with the device's processor, cause the apparatus to perform the disclosed methods. It is also emphasised that the present invention extends to respective computer program products that cause respective apparatuses at the DejaView device, the mobile handset, and the server running the online application to perform the disclosed methods.

Abstract

Disclosed is a three-tier architecture for memory assistance. A small, unobtrusive, wireless sensing device is dynamically programmed to automatically sense important information related to the wearer's current context and environment. Using an internal rule engine, the device decides when information pertaining to the user is important, and automatically uploads the information to a mobile handset application. The application, in turn, analyses the information using extra information assimilated from its sensors, and uploads the complete information set to an online internet service application for further processing. The processing is done online, using other external data sources as well as the service's own internal processing engine, and returns information to the user through a notification engine, and to the device and handset through a set of rule updates. Further memory planning can be achieved through a secondary process of planning and review via a desktop PC or laptop.

Description

MEMORY AID
FIELD OF THE INVENTION
The present invention relates to a method of aiding memory, and more specifically to a method of aiding the memory of a user of a mobile device, to respective methods of operating a processor of each of a mobile device and a device for capturing images, to respective computer program products for causing an apparatus to perform the respective methods, to an apparatus comprising a processor and memory including computer program code for one or more programs, and to a device for capturing images. The invention is usable to support sufferers of early stage dementia, and normal age related memory loss.
DESCRIPTION OF THE PRIOR ART
As disclosed in R. K. Mahurin, et al., "Structured Assessment of Independent Living Skills: Preliminary Report of a Performance Measure of Functional Abilities in Dementia," Journal of Gerontology, vol. 46, pp. P58-P66, 1991, active recall of one's actions, people's names, places, events and objects is a requirement of daily life. Early stage dementia sufferers are often prone to a decline in active recall, which inhibits confidence, and thus prevents independent living.
Prospective memory, the task of remembering an intention or task, requires both a retrospective component (remembering what is required) and a prospective component ('remembering to remember'). There is substantial evidence that prospective memory, as opposed to retrospective memory, is more difficult for individuals to recall (see F. A. Huppert, et al., "High prevalence of prospective memory impairment in the elderly and in early-stage dementia: Findings from a population-based study," Applied Cognitive Psychology, vol. 14, pp. S63-S81, 2000, hereinafter Huppert 1, and A. J. Sellen, et al., "What brings intentions to mind? An in situ study of prospective memory," Memory, vol. 5, pp. 483-507, 1997). Moreover, there is evidence in F. A. Huppert and L. Beardsall, "Prospective memory impairment as an early indicator of dementia," Journal of Clinical and Experimental Neuropsychology, vol. 15, p. 805, 1993, hereinafter Huppert 2, that people with minimal dementia perform similarly with prospective memory as do people with moderate dementia, thus making prospective memory decline a good indicator of early stage dementia.
As prospective memory is a combination of remembering a (future) intention and (past) steps/tasks, it is important to support a person in both of these activities. In Huppert 1 it is argued that the intention is often remembered, but fulfilling the tasks in a timely manner is not achieved because the required tasks are forgotten, a situation which is worse for mild dementia patients.
In F. I. M. Craik, "A functional account of age differences in memory," 1986, p. 409, there is presented a framework for understanding the relationship of age to memory tasks where a person's performance can be judged by two factors: external factors including cues and context, and the type of mental operation required. Craik argues that information retrieval (active recall of retrospective memory) decreases with age, while age-related deficits are reduced when environmental support is high.
Memos, a system developed to assist prospective memory using a two tier, three component model is presented in A. I. T. Thone-Otto and K. Walther, "How to design an electronic memory aid for brain-injured patients: Considerations on the basis of a model of prospective memory," International Journal of Psychology, vol. 38, p. 236, 2003. The Memos system addresses a model first presented in J. Ellis, "Prospective Memory or the Realization of Delayed Intentions: A Conceptual Framework for Research," Prospective memory: theory and applications, 1996, which describes five stages of intention: encoding, delay, performance retrieval, execution, and evaluation. The Memos system uses a Personal Memory Assistant (PMA) to remind a Brain-Injury patient of events and tasks at an appropriate time and allows for guidance through the activity to complete the intention.
C. Buiza, et al., "HERMES: Pervasive Computing and Cognitive Training for Ageing Well," Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living, pp. 756-763, 2009 discloses a system called HERMES. The HERMES architecture connects a set of environmental sensors, including indoor and outdoor cameras, to a tier of analysis engines through a system controller and a middleware layer (Chilix). Two separate databases store information. The first is a relational database storing the application data, while the second is a database used to infer new knowledge, validate meta-data and apply rules using a rule engine.
Electronic memory assistive devices, such as the Microsoft SenseCam, as discussed in S. Hodges, et al., "SenseCam: A Retrospective Memory Aid," ed, 2006, pp. 177-193, have been developed to assist retrospective autobiographical memory through a retrospective view of an accumulation of images taken from a patient's point of view in a process called "lifelogging". These images are then viewed in a retrospective session with a carer - a technique similar to a user diarising all their past daily events, which is a common practice used currently by carers of dementia patients. In E. Berry, et al., "The use of a wearable camera, SenseCam, as a pictorial diary to improve autobiographical memory in a patient with limbic encephalitis: A preliminary report," Neuropsychological Rehabilitation, vol. 17, pp. 582-601, 2007, this technique is proven to assist a patient's autobiographical memory. Images are captured by the SenseCam through the use of sensor and time triggers resulting in huge image datasets. These image datasets can be so large, that filtering out the relevant/useful information has been a research topic for the last few years, as discussed in A. Smeaton, "Content vs. Context for Multimedia Semantics: The Case of SenseCam Image Structuring," ed, 2006, pp. 1-10.
Although large datasets in lifelogging systems are computationally intensive to sift through, in M. L. Lee and A. K. Dey, "Providing good memory cues for people with episodic memory impairment," Tempe, Arizona, USA, 2007, pp. 131-138 it is shown that specific memory cues are very successful in reminding users of current and present experiences. These cues are classified into person, action, object, and place. However, while the SenseCam and other lifelogging technologies discussed in J. Healey and R. W. Picard, "Startlecam: A cybernetic wearable camera," 1998, p. 42, and S. Mann, "WearCam (The Wearable Camera): Personal Imaging Systems for long-term use in wearable tetherless computer-mediated reality and personal Photo/Videographic Memory Prosthesis," 1998, p. 124 do facilitate successful retrospective autobiographical memory cues, their use as a memory aid is limited as there is no method for real time feedback to the user in assisting them with their current task.
Accordingly, there is a need for an improved memory aid. In particular, there is a need for an improved memory aid that is capable of providing feedback to a user in assisting them with their current task. There is also a need for a memory aid that is capable of providing complex image and data analysis and processing, yet which is not too cumbersome for a user to use. There is further a need for a memory aid that overcomes or obviates the problems described above.
SUMMARY OF THE INVENTION
The present invention aims to provide an improved memory aid. In particular, the present invention aims to provide an improved memory aid that is capable of providing feedback to a user in assisting them with a task they are currently performing. The present invention also aims to provide a memory aid that is capable of providing complex image and data analysis and processing. The present invention also aims to provide a memory aid that is not too arduous or cumbersome for a user to use.
A first aspect of the present invention provides a method of operating a processor of a mobile device, the method comprising: receiving information comprising image data defining an image; causing contextual data and data defining the image to be uploaded to a remote system; and communicating to a user of the mobile device data associated with their current context, on the basis of data received from the remote system.
A second aspect of the present invention provides a computer program product for causing an apparatus to perform the method of the first aspect.
A third aspect of the present invention provides a method of aiding the memory of a user of a mobile device, comprising: receiving, from a mobile device, information comprising contextual data and image data defining an image; processing the received information; and sending, to the mobile device, data associated with the user's current context on the basis of a result of the processing.
A fourth aspect of the present invention provides a method, comprising: receiving, from a mobile device, information comprising (a) image data defining an image captured by a camera of a device and one or more of (b) sensor data from one or more sensors of the device, (c) data indicating one or more rules of a set of capture rules which triggered capture of the image, and (d) contextual data; evaluating whether the set of capture rules needs updating by processing the received information; and sending, to the mobile device, rule update information when it is determined that the set of rules needs updating.
A fifth aspect of the present invention provides a computer program product for causing an apparatus to perform the method of one of the third aspect and the fourth aspect.
A sixth aspect of the present invention provides a method of aiding the memory of a user of a mobile device, the method comprising: receiving at the mobile device, from a peripheral device, information comprising image data defining an image; causing information comprising contextual data and data defining the image to be uploaded from the mobile device to a remote system; comparing, at the remote system, the information received to internal and/or external data sets; sending, from the remote system to the mobile device, data associated with the user's current context on the basis of a result of the comparing; and communicating to a user of the mobile device data associated with their current context, on the basis of the data from the remote system.
A seventh aspect of the present invention provides a method of operating a processor of a device that comprises a camera, one or more sensors, a transmitter, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and the processor, wherein the method comprises: compiling information comprising (a) image data defining an image captured by the camera and one or both of (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture, and causing the transmitter to send the information towards a second device.
An eighth aspect of the present invention provides a method of operating a processor of a device for capturing images, which device comprises a camera, one or more sensors, a receiver, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and the processor, wherein the method comprises: receiving data from a second device via the receiver; and changing the set of capture rules stored in the memory on the basis of the data received.
A ninth aspect of the present invention provides a computer program product for causing an apparatus to perform the method of one of the seventh aspect and the eighth aspect.

A tenth aspect of the present invention provides an apparatus comprising a processor and memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus at least to perform: compiling information comprising (a) image data defining an image captured by a camera and one or both of (b) sensor data from one or more sensors and (c) data indicating one or more rules of a set of capture rules which triggered the capture, which set of capture rules define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and causing a transmitter to send the information.
An eleventh aspect of the present invention provides an apparatus comprising a processor and memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus at least to perform: receiving data; and changing a set of capture rules stored in memory on the basis of the data received, which set of capture rules define when a camera is to be operated to capture an image on the basis of sensor data from one or more sensors.
A twelfth aspect of the present invention provides a device for capturing images, the device comprising a camera, one or more sensors, a transmitter, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to compile information comprising (a) image data defining an image captured by the camera and one or both of (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture, and to cause the transmitter to send the information towards a second device.
A thirteenth aspect of the present invention provides a device comprising a camera, one or more sensors, a receiver, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to change the set of capture rules stored in the memory on the basis of data received from a second device via the receiver.
Preferred features of the present invention are as defined in the dependent claims.

BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings, in which:
Figure 1 is a schematic diagram showing a system embodying a first embodiment of the present invention;
Figure 2 is a diagram showing the transfer of messages between tiers of the system of Figure 1;
Figure 3 is a schematic diagram showing components of a wearable device that forms part of the system of Figure 1 and that is in accordance with the first embodiment of the present invention;
Figure 4 is a perspective view showing a prototype of the wearable device of Figure 3;
Figure 5 is a flow diagram illustrating the process of compiling an information shard, as performed at the wearable device of Figure 3 in accordance with the first embodiment of the present invention;
Figure 6 is a schematic diagram showing the structure of a rule database used by the wearable device of Figure 3;
Figure 7 is a schematic diagram showing the different information components of an information shard, as used in the first embodiment of the present invention; and
Figure 8 is a schematic diagram showing the functionality of an internet service application, according to the first embodiment of the present invention.
DESCRIPTION OF PREFERRED EMBODIMENTS
A first embodiment of the present invention will now be described with reference to Figures 1 to 8. Features of possible alternative embodiments will also be discussed. The first embodiment provides a system to facilitate active recall of normal and routine daily activities, people, places and objects, and to cue a user with associated data related to these. Prospective memory, known as 'remembering to remember' (as discussed in E. Winograd, "Some observations on prospective remembering," Practical aspects of memory: Current research and issues, vol. 1, pp. 348-353, 1988), is the act of remembering an intention or action. We define prospective memory support as the action of providing a user with memory cues beyond a traditional alarm. By sensing the user's surroundings, the system of the first embodiment provides contextual cues (which have been processed online) to assist the user with the intention of remembering future events. An example message of the system would be: 'You are talking to Miss Smith, whom you last spoke to on Wednesday afternoon at work'. In the first embodiment, this is done through a three-tier assistive technological system to sense, process and assist the user's real-time experiences.
The three-tier system architecture, in which each tier fulfils a unique role based on its abilities and constraints, comprises three components: an external periphery sensing device (herein called the DejaView Device), an Internet-connected mobile telephone handset application, and an online Internet service application which is in turn connected to the user's online presence and social networks. The system assists in active recall by analysing what the user is currently experiencing or dealing with and feeding back relevant information to the user. This automatic cueing reminds the user of features within the current environment, data associated with the current context, and information related to the current context from the user's social networks and online Internet presence. The system is intended to support sufferers of early-stage dementia and normal age-related memory loss. This pervasive, context-aware sensing, processing and automated user feedback system is called "DejaView" herein.
The conceptual design for this system is a portable, unobtrusive and continuous-use memory aid. The system attempts to remind users of important factors in their current environment, classified into cues relating to people, places, objects and actions. The concept of the system has a number of significant advantages over existing devices, representing a step change in the capability of technology-based memory aids. These include: a) a low-power, wearable, intelligent device which autonomously captures images and sensed information to efficiently cue autobiographical memories, b) a web-enabled, wireless system that integrates with the user's mobile handset to add contextual information, provide feedback, and link the DejaView device to the Internet, c) the automatic annotation and analysis of images, using multiple distributed databases, allowing the system to effectively present the user with the contextual cues of relevant images, and allow further effective querying from a knowledge repository, and d) supporting memory in a periodic planning process, and also as a real-time memory aid.
To illustrate the functioning of the device, a walkthrough example is presented:
Mr Jefferies is woken up in the morning by the alarm clock on his mobile handset. The reminder message alerts him to remember to put on his DejaView Device. Having put on his clothing and DejaView Device, he walks to the kitchen and turns on the kettle. As he is preparing his cup of tea, his doorbell rings, which he goes to answer. It is his neighbour, asking him whether she can borrow a cup of sugar. He can't remember her name, so as he fetches the sugar, he glances down at his mobile phone, and sees that the DejaView Device has taken several images already, uploaded them, analysed them, and is informing him that he was talking to Mrs Jones who lives next door. Having fetched the sugar and given it to Mrs Jones, he asks her how Mr Jones is doing. They converse for a short while, and afterwards he heads back indoors and sits down on his chair. As he looks down at his mobile handset, he sees that he was making a cup of tea, and heads back into the kitchen to continue making his cup of tea. Having had his breakfast, Mr Jefferies starts walking to the corner shop for his weekly groceries. As he leaves the house and starts walking away from his car, his phone vibrates, and a message is displayed on his phone reminding him that he has a doctor's appointment in twenty minutes, which he will be late for if he does not drive there soon. Mr Jefferies decides to postpone his shopping and go to the doctor instead. That evening, whilst at home, Mr Jefferies reviews his calendar and labels the images related to his doctor as his doctor. He adds the date of his next appointment.

The overall architecture of the system embodying the first embodiment of the invention can be seen in Figure 1. The transfer of the communication messages between the three tiers of the system is shown in Figure 2.
Data from the wearable device is processed and classified on an energy-efficient, lightweight processor which assesses the value of the information. Important, valuable information, as classified by a set of rules called the 'capture' rules, is transmitted to the mobile handset. The mobile handset then further classifies and processes the information, appends further contextual data, such as data resulting from a feature detection process or data (e.g. GPS data) indicating the geographic location of the DejaView device and/or the mobile handset, and uploads relevant information to an online internet application for further data analysis, processing, storage and offline review. The information online is also compared to the user's online Internet presence (activities such as: checking their online calendar to ensure that they are not far from a pre-arranged appointment; checking their social networks to see whether they are currently in contact with someone; comparing images of people they are currently talking to with images of people they know, such as by comparing the captured image with images previously captured and stored by the remote system). Information extracted and evaluated by this process is then compared to a table of rules specifying when and how to contact the user (notification rules), and the user is then informed as required by these rules. Updated and processed information is then also transmitted back to the handset for communication to the user, along with rule updates to the periphery device and handset.
The mobile handset is used as the primary interface to the whole system, although a secondary interface is also used to address the periodic system feedback and memory planning. Using the mobile handset as the primary user interface allows a high level of system mobility and interoperability.
The data communicated using this system is encapsulated into three different objects describing their function. A data object carrying information from the sensor or handset to the online service application is known as an Information 'Shard'. Information objects fed back from the Online Internet Application to the Mobile Handset and the periphery sensing device are known as the 'review rule update' and 'capture rule update' respectively. Data transmitted to the user, a carer, or another interested party is a Notification. Each of the three components of the system has a specific role, with clear inputs and outputs to perform its function. The role of the DejaView Device is to sense data autonomously and energy-efficiently, and to alert the other system components of the important information. The mobile handset has two roles: a) to communicate the important information to the Online Internet Application, and b) to provide an interface with which the user can interact. The Online Internet Application is the data processing centre. It provides all the tools necessary to analyse images and data intensively and rapidly, in order to provide the user with a notification. It also processes the information supplied by the other components to evaluate whether rules need updating. Its final function is to provide a Knowledge repository where all the data is stored for further offline analysis, manual annotation and periodic review/planning. Each of the three components of the system will now be described in turn.
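By way of illustration only, these three data objects might be modelled as in the following minimal Python sketch; all class and field names here (InformationShard, RuleUpdate, Notification, and so on) are hypothetical, as the embodiment does not prescribe a concrete data format.

```python
# Illustrative sketch only: hypothetical models of the three data objects
# exchanged between the tiers. The embodiment does not prescribe a format.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class InformationShard:
    """Carries information from the sensing device/handset towards the online service."""
    image_data: bytes                    # (a) image captured by the camera
    sensor_data: Dict[str, float]        # (b) low-power sensor readings at capture time
    rules_used: List[int]                # (c) ids of capture rules that triggered the capture
    context: Dict[str, str] = field(default_factory=dict)  # e.g. GPS fix, detected features


@dataclass
class RuleUpdate:
    """Fed back from the online service: a 'capture rule update' or 'review rule update'."""
    target: str                          # "device" (capture rules) or "handset" (refine rules)
    added: List[dict] = field(default_factory=list)
    deleted: List[int] = field(default_factory=list)
    amended: List[dict] = field(default_factory=list)


@dataclass
class Notification:
    """User-facing message produced by the Notification Engine."""
    recipient: str                       # the user, a carer, or another interested party
    message: str                         # e.g. "You are talking to Mrs Jones..."
    protocol: str = "handset-display"    # how the message is to be delivered
```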
A system overview of the DejaView Device can be seen in Figure 3. The main components are a low-power microcontroller (MCU), a Complex Programmable Logic Device (CPLD), a CMOS camera sensor, a low-power static memory (SRAM), a set of low-power sensors, a three-colour LED indicator, and a Bluetooth communications radio. In alternative embodiments, the Bluetooth communications radio may be replaced by some other, preferably wireless, transmitter and receiver for communication with the mobile handset. Preferably the MCU turns off the transmitter/receiver when communication with the mobile handset is not required, in order to conserve energy. Preferably the MCU turns off the camera a certain set period of time after an image has been captured, in order to conserve energy.
The DejaView Device is designed to be worn as close as possible to the point of view of the user. A prototype of the DejaView Device can be seen in Figure 4.
To allow the device to be worn closer to the user's point of view, much more stringent constraints on the size and weight of the device are required, making miniaturisation crucial. As the energy source is most often the largest and heaviest component in the system, the most energy-efficient systems are required to limit the required size of the energy source. The device may be powered by a lithium polymer battery, charged through a connector such as a micro-USB connector. Each of a 550mAh battery and an 800mAh battery has been found to be suitable. Preferably the device weighs no more than 80 grams. In some embodiments it weighs no more than 70 grams. More preferably it weighs no more than 60 grams, and more preferably still it weighs no more than 50 grams.
The set of low-power sensors in this embodiment comprises a microphone, a passive infrared sensor (PIR), a light sensor, an accelerometer, and a compass. In other embodiments, one or more of these sensors may be omitted and/or other sensors may be provided. Sensor data from the one or more low-power sensors indicates characteristic(s) of the environment within which the sensors are disposed, and thus can be considered contextual data, i.e. data representative of the user's current context.
The DejaView Device software is built upon a version of the Unified Framework, discussed in G. V. Merrett, et al. (2006), The Unified Framework for Sensor Networks: A Systems Approach, available: http://eprints.ecs.soton.ac.uk/12955/, to clearly separate out the energy management from communications and sensing. The low-power sensors may constantly monitor the user's environment and output data, i.e. sensor data. The application-level software running on the Unified Model processes the sensor data against an in-memory table of rules to assess the value of the sensor data. When the sensed data is valued more highly than the thresholds set out in the in-memory rules, the higher-power camera sensor is turned on to capture an image. That is, the processor (MCU) compares the output of the one or more low-power sensors present in the device and, when sensor data from one or more of the low-power sensors meets certain criteria set out in one or more of the capture rules, such as having a value that exceeds a certain, predetermined threshold value, the processor (MCU) triggers the camera to capture a photo. This allows the camera, a relatively high-power device, to be turned off until it is determined by the processor that an image should be captured, thus saving power.
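For illustration, the following minimal sketch shows how such an evaluation loop might look, assuming a hypothetical rule format of (id, sensor, predicate, result-if-true, result-if-false); it is a sketch of the technique, not the embodiment's actual firmware (which runs on a low-power MCU, typically in a compiled language).

```python
# Illustrative sketch: evaluating low-power sensor readings against an
# in-memory table of capture rules. The rule format is hypothetical.

# Each rule: (id, sensor, predicate, result_if_true, result_if_false)
CAPTURE_RULES = [
    (1, "microphone",    lambda v: v > 0.6, "capture_image", "no_action"),
    (2, "accelerometer", lambda v: v > 1.5, "capture_image", "no_action"),
    (3, "pir",           lambda v: v > 0.5, "capture_image", "no_action"),
]


def evaluate(sensor_readings):
    """Return the ids of the rules whose outcome is an image capture."""
    triggered = []
    for rule_id, sensor, predicate, if_true, if_false in CAPTURE_RULES:
        value = sensor_readings.get(sensor)
        if value is None:
            continue                      # this sensor produced no reading
        action = if_true if predicate(value) else if_false
        if action == "capture_image":
            triggered.append(rule_id)
    return triggered


# A loud noise and a sharp movement both trigger the (otherwise off) camera.
readings = {"microphone": 0.8, "accelerometer": 1.7, "pir": 0.1}
triggered = evaluate(readings)
if triggered:
    print("turning camera on; capture triggered by rules", triggered)
```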
Once an image capture is triggered, the sensed information, along with the rules which triggered the capture, is compiled into a data package (an Information Shard) by the processor, and is transmitted to the application running on the mobile handset by way of the processor causing operation of the transmitter. The compilation process is illustrated in Figure 5. Although it is considered that Figure 5 is self-explanatory, for completeness it illustrates all of (a) image data defining the captured image, (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture being compiled into the Information Shard for sending to the mobile handset. In this way, the Information Shard can be considered to describe a self-contained event, based on the low-power sensors' data and the image data. The Information Shard may permit the application running at the mobile handset and/or the application running online to determine which rule(s) were used to trigger the capture and the sensor data that triggered the effect of the rule(s).
In alternative embodiments, one or the other of (b) and (c) may be omitted from the Information Shard, such that the Information Shard comprises only (a) and (b) or (a) and (c).
In some embodiments, the sensor data compiled into the Information Shard comprises the sensor data output from all of the low-power sensors at the time of image capture, or at least at the time the camera was triggered to capture the image.
In some embodiments, all of the rules of the set of capture rules are compiled into the Information Shard, so as to indicate the state of the rule set at the time the image was captured.
Still further, in some embodiments other contextual data, such as an indication of the geographic location of the wearable device at the time of image capture, is compiled into the Information Shard to be sent to the mobile handset. In such embodiments, the wearable device would preferably comprise a sensor, such as a global positioning system (GPS) sensor, for determining the geographic location of the device, and the further contextual data would comprise GPS data. However, it is preferred to omit such a geographic location sensor in an effort to minimise the size, weight and power demand of the wearable device.
In some embodiments, the compiling of an Information Shard comprises appending one or both of the sensor data and the data indicating one or more rules as metatags in an image file that defines the captured image.
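As a rough illustration of this compilation step, the sketch below packages the shard components into a single byte string; the JSON packaging, hex image encoding and field names are purely hypothetical, since the embodiment does not fix a serialisation format.

```python
# Illustrative sketch: compiling an Information Shard. The packaging is
# hypothetical; the embodiment only requires that (a), and one or both of
# (b) and (c), travel together.
import json


def compile_shard(image_bytes, sensor_readings, triggered_rule_ids,
                  context=None, include_sensors=True, include_rules=True):
    shard = {"image": image_bytes.hex()}           # (a) always present
    if include_sensors:
        shard["sensors"] = sensor_readings         # (b) may be omitted in some embodiments
    if include_rules:
        shard["rules_used"] = triggered_rule_ids   # (c) may be omitted in some embodiments
    if context:
        shard["context"] = context                 # e.g. a GPS fix, if the device has one
    return json.dumps(shard).encode()              # byte string for the transmitter


packet = compile_shard(b"\xff\xd8\xff\xe0", {"microphone": 0.8}, [1])
```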
The application continuously captures images and sensor data according to the 'capture' rule database until the rules are changed. Capture rules can be adaptively changed through feedback from the handset and the online web server. The rule database is structured as can be seen in Figure 6. Each rule has an identifier ("ID") of the rule in the rule set, an indication of the low-power sensor(s) whose output value(s) the rule applies to ("Sensor"), an algorithm or formula into which the output value(s) of the sensor(s) is input ("Rule") for comparison, an indication of an action to be performed if the analysis results in a positive outcome of the comparison ("Result if true"), an indication of an action to be performed if the analysis results in a negative outcome of the comparison ("Result if false"), and an indication of rule lifetime ("Lifetime") that dictates when the rule is in effect, as discussed in more detail below. A rule may be in effect at all times, or only at certain time(s).
Capture rules can contain variables, and a small set of pre-defined functions such as sum, magnitude, and time-delayed values. The time-delayed values can be viewed as 'taps' in a Finite Impulse Response (FIR) filter, allowing reasonably complex filters to be created in the rule database. The pre-defined functions are simple, often-used functions which allow rules to be created using simple structures. Due to the limitation of memory size, only a fixed number of variables can be used within the rules database, allowing more complex sets of rules to be built from a set of individual rules.
The rule lifetime allows rules to last for either a specified short period of time, or a longer, more permanent time period. For example, a rule may expire after a set period of time and be deleted from the rule set. Alternatively, a rule may be time-specific such that it causes one effect during a first period of time, and a different effect during a second period of time. This allows the Internet Application to enable the DejaView device to tune its sensitivity to capture more/fewer images when it regards certain time periods as more relevant to the user for capturing images, as opposed to other periods of time during which it regards the user's activity as less important to capture.
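To illustrate, a rule record mirroring the fields of Figure 6, including a lifetime field, might look as follows; the field names and the seconds-of-day lifetime encoding are hypothetical.

```python
# Illustrative sketch: a capture-rule record mirroring the fields of Figure 6,
# with a lifetime limiting when the rule is in effect. Names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional, Tuple


@dataclass
class CaptureRule:
    rule_id: int                          # "ID"
    sensor: str                           # "Sensor": which low-power sensor it reads
    predicate: Callable[[float], bool]    # "Rule": the comparison applied to the value
    result_if_true: str                   # "Result if true", e.g. "capture_image"
    result_if_false: str                  # "Result if false", e.g. "no_action"
    lifetime: Optional[Tuple[int, int]] = None  # (start, end) seconds of day; None = always

    def in_effect(self, seconds_of_day: int) -> bool:
        if self.lifetime is None:
            return True
        start, end = self.lifetime
        return start <= seconds_of_day <= end


# A rule that is only in effect between 08:00 and 20:00:
daytime = CaptureRule(4, "microphone", lambda v: v > 0.6,
                      "capture_image", "no_action", lifetime=(8 * 3600, 20 * 3600))
print(daytime.in_effect(12 * 3600))   # True at midday
```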
The processor of the DejaView device changes the set of capture rules on the basis of a 'capture rule update' received from the mobile handset. The processor may change the set of rules by adding a new rule, deleting an existing rule, or amending an existing rule. By way of a change to the rule set, the processor may set a rule that recites a threshold value or a range of values against which a value of sensor data from one or more of the low-power sensors is compared. The set rule may dictate a certain action (e.g. capture image) if the value of the sensor data matches or exceeds the threshold value or falls within the range, respectively (i.e. "result if true"), and dictate a second action (e.g. maintain off state of camera) if the value of the sensor data is less than the threshold value or outside of the range, respectively (i.e. "result if false").
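A minimal sketch of applying such a 'capture rule update' is given below, assuming a hypothetical update structure with added, deleted and amended entries.

```python
# Illustrative sketch: applying a 'capture rule update' to the in-memory rule
# set by adding, deleting or amending rules. The update structure is assumed.
def apply_rule_update(rules, update):
    """rules: dict of rule_id -> rule record (plain dicts here for brevity)."""
    for rule in update.get("added", []):
        rules[rule["rule_id"]] = rule
    for rule_id in update.get("deleted", []):
        rules.pop(rule_id, None)
    for rule in update.get("amended", []):
        if rule["rule_id"] in rules:
            rules[rule["rule_id"]].update(rule)   # e.g. raise a threshold value
    return rules


rules = {1: {"rule_id": 1, "sensor": "microphone", "threshold": 0.6}}
rules = apply_rule_update(rules, {"amended": [{"rule_id": 1, "threshold": 0.8}]})
print(rules[1]["threshold"])   # 0.8
```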
Turning now to the application running on the mobile handset, the application comprises two main components: a component allowing the user to simply and easily interface with the pervasive system; and a communication component allowing the information gathered by the DejaView Device to be further processed, classified and forwarded on to the Internet Service application.
The Interface component provides the user with a system to value previously captured information, and change current rules using a simple interface. In this way, a capture rule update may be generated and sent to the DejaView device to cause the processor of the device to change the set of capture rules. The interface component also acts as an interface to the Online Internet Service Application allowing the user to link their online presence to the device and handset.
Using the mobile handset as the user interface to the DejaView system removes the requirement for the DejaView device to have its own interface, which again reduces the size, weight and power demand of the wearable device, and also simplifies the user experience.
Communication from the DejaView Device to the Internet Application is performed through the mobile handset, and thus runs as a separate process on the handset. The process waits for data to arrive from the DejaView Device. On receiving information from the DejaView Device, the application analyses the incoming data shard according to a set of rules called the 'refine' rules database. Refine rules are written in a similar format to the 'capture' rules, but comprise more complex functions and rule types allowing for basic image processing functions such as edge detection, feature detection and basic face detection (preferably not face recognition). Features detected could include an object, a place, or a person. Once a shard has been processed using the handset's 'refine' rules, the shard is then pushed on to the Online Internet Service Application for final processing, preferably with contextual data generated at the mobile handset appended to the information received at the mobile handset from the DejaView device, such that the Information Shard uploaded from the handset comprises augmented information, as indicated in Figure 2. Such contextual data may be data resulting from the feature detection process that indicates a detected feature, and/or data (e.g. GPS data) indicating the geographic location of the DejaView device and/or the mobile handset.
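To make this handset-side process concrete, the following sketch shows one hypothetical shape it could take; detect_faces, get_location and upload are placeholder functions standing in for the refine-rule image processing, the location service and the upload channel respectively.

```python
# Illustrative sketch of the handset-side process: run 'refine' rules (here a
# face-detection placeholder), append contextual data, and push the augmented
# shard onwards. detect_faces, get_location and upload are hypothetical stubs.
import json


def refine_and_forward(raw_shard_bytes, detect_faces, get_location, upload):
    shard = json.loads(raw_shard_bytes)
    features = detect_faces(bytes.fromhex(shard["image"]))  # refine rule output
    context = shard.setdefault("context", {})
    if features:
        context["detected_features"] = features             # why the image matters
    context["location"] = get_location()                    # e.g. handset GPS fix
    upload(json.dumps(shard).encode())                      # push to the online service


raw = json.dumps({"image": b"\xff\xd8\xff\xe0".hex(),
                  "sensors": {"microphone": 0.8}, "rules_used": [1]}).encode()
refine_and_forward(raw,
                   detect_faces=lambda image: ["face"],
                   get_location=lambda: "50.93,-1.40",
                   upload=lambda data: print("uploading", len(data), "bytes"))
```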
Figure 7 details the different information components of an information shard. The information contained in the shard is intended to describe a self-contained experience based on the sensors' knowledge. On leaving the handset, an information shard should contain the important information (Image Data); why the information is important (Detected Features); the information which it used to make the decision (Sensor Data); and the rules it used to decide on the decision (Rules Used). However, as discussed above, the Information Shard sent to the mobile handset from the wearable device may exclude one or other of the sensor data and the rules used. In such cases, the excluded data may also be excluded from the information shard sent from the mobile handset to the remote system. Although the information sent in this shard might appear to be redundant, it is required to ensure the further stages in the pervasive system can understand what rules were used to trigger the capture, and thus make changes to those rules should it decide to do so.
In some embodiments, the mobile handset application may receive image data defining plural images, preferably in plural received Information Shards. The application may analyse the images defined by the image data and discard some of the image data, for example if the image data indicates that the image captured is unclear, perhaps by way of the camera sensor being at least partially obscured at the time of capture. The mobile handset application would then cause data defining only some of the plural images to be uploaded.
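By way of a hedged illustration of this filtering, the sketch below discards frames whose pixel values are nearly uniform, as might result from an obscured lens; the heuristic and threshold are hypothetical, and a real implementation might use a more robust sharpness measure.

```python
# Illustrative sketch: discarding image data judged unclear before upload,
# using a crude spread-of-pixel-values heuristic (an obscured lens tends to
# give a near-uniform frame). The threshold is hypothetical.
def is_clear(pixels, min_spread=30):
    """pixels: iterable of greyscale values in 0-255."""
    values = list(pixels)
    return (max(values) - min(values)) >= min_spread


frames = {"clear": [10, 200, 40, 180], "obscured": [12, 14, 13, 12]}
to_upload = {name: px for name, px in frames.items() if is_clear(px)}
print(sorted(to_upload))   # only the clear frame survives: ['clear']
```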
Turning now to the Internet Service Application, or remote system, the Online Internet Service (Figure 8) provides a mechanism for long-term backed-up data storage, powerful offline data analysis and annotation, a gateway to external online services (such as face recognition services), a service to update the rules databases on both the handset and the DejaView Device, a Knowledge repository for periodic review, and a Notification Engine which describes who should be contacted (the user, their carer or any other interested party) and how.
The Internet service application waits for new information shards to be received from the mobile handset, and on reception processes the data against a rule set for the user called the 'compare' rules. The compare rules, which supersede the capture and refine rules, describe the required comparison of the current information shard to internal and external data sets, as well as running intensive processing on the image to extract information which would prove useful to the user in the future (such as classification of the objects within that environment). Compare rules can contain any subset of capture or refine rules.
The Annotation Engine runs the set of compare rule commands through the data analysis engine, and stores the results in the Knowledge repository. Once stored in the Knowledge repository, the results of the data analysis are then forwarded on to the Notification Engine, which parses the annotations into user-friendly messages and informs the user (and anyone specified in the notification list) using the protocol specified in the notification list.
On receipt of compare results, the Notification Engine runs the annotations and supporting contextual data through a notification rule set, describing who the message should go to and how the message should be presented to the user. This information is compiled into a notification which aids the user with their current activity and assists in memory tasks.
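A minimal sketch of such a Notification Engine step follows; the annotation keys and the notification rule format are assumptions for illustration only.

```python
# Illustrative sketch: parsing annotations into user-friendly messages and
# routing them per a notification rule set. The annotation keys and the rule
# format are assumptions for illustration only.
def notify(annotations, notification_rules):
    messages = []
    if "person" in annotations:
        messages.append("You are talking to {}.".format(annotations["person"]))
    if "object" in annotations:
        messages.append("You were using the {}.".format(annotations["object"]))
    if not messages:
        return []
    text = " ".join(messages)
    return [(rule["recipient"], rule["protocol"], text) for rule in notification_rules]


for out in notify({"person": "Mrs Jones"},
                  [{"recipient": "user", "protocol": "handset-display"},
                   {"recipient": "carer", "protocol": "email"}]):
    print(out)
```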
Accordingly, as described herein, the online application receives data of an information shard from the mobile handset, analyses the received data to extract information and compare the data with other data, such as image data defining images accessible by the online application (e.g. images stored by a social network service, images previously received from the mobile handset, and/or stock images stored online or elsewhere and accessible by the online application), and provides annotations to the data of the information shard. The online application may combine data from the information shard with other data to which the online application has access, such as data from one or more of an online calendar, a knowledge system, a social networking account, and data defining other images, such as data of an online image gallery, and/or the online application may change data from the information shard on the basis of such other data. As a result of this comparison, combining and/or changing of the data from the information shard with, or on the basis of, such other data, the online application might determine that a certain image defined in the received data comprises an image of a person the user knows, an image of a known article or object such as a kettle, and/or an image of a known place such as the user's front door or kitchen. Accordingly, an annotation is provided to the data to indicate at least some of the content of the data. The annotations may, for example, include an indication of the name or other information of a person identified in the image data of the shard, an indication of a place, or an indication of an object. The annotations are then used by the notification engine to generate data of one or more notifications for sending to the mobile handset. The data might include the name or other identifier of a person, place, object or action.
Data defining a notification is then sent to the mobile handset, which provides the user of the handset with a notification according to the data it receives from the online application. The notification(s) are sent according to the notification rule set, which dictates the destination(s) and/or protocol or format of the notification. It is of course implied that the online application will determine that the mobile handset is to be a destination of the notification, on the basis of data indicating an address or identity of the mobile handset comprised in the Information Shard, or in session data accessible to the online application as a result of a session being present between the mobile application and the online application, or on the basis of some other data indicating an address or identity of the mobile handset.
The notification may be provided to the user by the mobile handset by way of the handset displaying a graphic image, a photograph, text, or any combination of these on a display of the handset. Alternatively or additionally, the notification may be provided to the user by the mobile handset by way of the handset or wireless/wired headset emitting an audible sound, such as spoken word(s), to the user. In any event, the data communicated to the user of the mobile device is associated with their current context (i.e. the context or environment they currently find themselves in), such as to provide the user with feedback that can assist them with a task they are currently, or were very recently, performing.
Moreover, as described herein, by processing the data comprised in the Information Shard received at the online application, the online application is able to evaluate whether the set of capture rules needs updating. For example, the online application might determine that the set of capture rules needs updating when it determines that images are being captured by the DejaView device at times of the day when there is little information of interest to be captured, or when it determines that the level of noise in the user's current environment is such that the camera of the DejaView device needs to operate only when the noise exceeds a different threshold. If it is determined that the capture rules need updating, the online application sends rule update information to the DejaView device via the mobile handset.

The distributed architecture of the system maintains computational power whilst also minimising the energy requirements of the wearable and portable parts of the system. The system, facilitated by the three-tier architecture, analyses context with computationally-intensive processing, performed primarily by the online application and the mobile handset, while observing strict energy requirements for the wearable device. Thus, the wearable device is able to draw a relatively low current, for example 38.5mA, from its onboard power source. This permits, in turn, the wearable device to have only a small and lightweight power source, resulting in a device with small overall dimensions and weight. Nevertheless, the user may still be provided with a memory aid that is able to carry out data-hungry processes, such as face recognition, and may receive notifications in real-time, or near real-time, that are relevant to their current context (i.e. environment). This cueing reminds the user of features within their current environment, data associated with their current context, and information related to their current context.
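Returning to the evaluation of whether the capture rules need updating, the sketch below illustrates one hypothetical such check, raising a microphone threshold when recent shards report persistently high ambient noise; the shard structure, rule ids and scaling factor are assumptions.

```python
# Illustrative sketch: deciding server-side whether the capture rules need
# updating, here by raising a microphone threshold when recent shards report
# persistently high ambient noise. Structure and factors are assumptions.
def evaluate_rule_updates(recent_shards, current_threshold, rule_id=1):
    noise = [s["sensors"]["microphone"] for s in recent_shards
             if "microphone" in s.get("sensors", {})]
    if not noise:
        return None                         # nothing to base a decision on
    average = sum(noise) / len(noise)
    if average > current_threshold:         # environment noisier than the rule assumes
        new_threshold = round(average * 1.1, 2)
        return {"amended": [{"rule_id": rule_id, "threshold": new_threshold}]}
    return None                             # no update needed


update = evaluate_rule_updates(
    [{"sensors": {"microphone": 0.9}}, {"sensors": {"microphone": 0.85}}], 0.6)
print(update)   # rule update information to send via the mobile handset
```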
In an alternative embodiment of the present invention, actions of the DejaView device discussed above are performed at the mobile handset. In effect, a device, preferably a mobile device, and more preferably a mobile telecommunication device, is provided that combines the above-described features and operations of the DejaView device and the mobile handset. According to this alternative embodiment, the device comprises one or more of the above-described low-power sensors and a camera, such as a CMOS camera sensor. Memory at the device stores a set of capture rules that define when the camera of the device is to be operated to capture an image, on the basis of sensor data from the one or more sensors of the device. A processor of the device receives from the camera image data defining the captured image. The processor compiles information comprising image data defining an image captured by the camera and one or both of sensor data from one or more sensors and data indicating one or more rules of the set of capture rules which triggered the capture, and causes the compiled information to be transmitted to the remote system. That is, the processor causes a transmitter of the device to send the information towards the remote system.
Optionally in this alternative embodiment, the processor of the device causes further contextual data to be uploaded to the remote system, substantially as discussed above. For example, the processor may process the image data to detect features in the image, for example faces, and an indication of the features detected may be uploaded to the remote system as contextual data. Alternatively or additionally, when the device comprises a sensor, such as a global positioning system (GPS) sensor, for determining the geographic location of the device, the further contextual data uploaded to the remote system may comprise GPS data.
The device of this alternative embodiment receives notifications from the remote system and provides a user of the device with a notification according to the data it receives from the online application, as discussed above. The notification may be provided to the user visibly and/or audibly, again as discussed above. In any event, the data communicated to the user of the device is associated with their current context (i.e. the context or environment they currently find themselves in), such as to provide the user with feedback that can assist them with a task they are currently, or were very recently, performing.
The device of this alternative embodiment may receive capture rule updates from the remote system, as the DejaView device does in the first embodiment described above, which cause the processor of the device to update the set of capture rules stored in the memory of the device.
According to this alternative embodiment, relatively little processing of the image data and/or contextual data is carried out at the device. Rather, complex image and data analysis and processing is carried out at the remote system, as described above. Accordingly, this alternative embodiment of the memory aid of the present invention maintains computational power whilst also minimising energy requirements of the user's device, i.e. the preferably portable, and more preferably wearable, part of the system. The system analyses context with computationally-intensive processing, performed primarily by the online application, while observing strict energy requirements for the device. Thus, the device is able to draw a relatively low current, which permits the device to have only a small and lightweight power source, resulting in a device with small overall dimensions and weight. Nevertheless, the user may still be provided with a memory aid that is able to carry out data-hungry processes, such as face recognition, and may receive notifications in real-time, or near real-time, that are relevant to their current context (i.e. environment).
While devices and methods of operation of those devices have been discussed herein, it is emphasised that each of the DejaView device, the mobile handset and the server running the online application inherently has apparatus comprising a respective processor and memory storing suitable computer program code for one or more programs, and that in each device the memory and computer program code are configured to, with the device's processor, cause the apparatus to perform the disclosed methods. It is also emphasised that the present invention extends to respective computer program products that cause respective apparatuses at the DejaView device, the mobile handset, and the server running the online application to perform the disclosed methods.
The scope of the invention is not limited by the described embodiments, but only by the appended claims.

Claims:
1. A method of operating a processor of a mobile device, the method comprising:
receiving information comprising image data defining an image;
causing contextual data and data defining the image to be uploaded to a remote system; and
communicating to a user of the mobile device data associated with their current context, on the basis of data received from the remote system.
2. The method of claim 1 comprising appending the contextual data to the received information to create augmented information, wherein the causing comprises causing the augmented information to be uploaded to the remote system.
3. The method of any preceding claim comprising processing the received image data to detect features in the image.
4. The method of claim 3, wherein the contextual data comprises an indication of the features detected.
5. The method of any preceding claim wherein the information received comprises contextual data.
6. The method of any preceding claim wherein the receiving comprises receiving the information from a peripheral device.
7. The method of any preceding claim wherein the information received comprises one or both of sensor data from one or more sensors and data indicating one or more rules of a set of capture rules which triggered capture of the image by a camera of the peripheral device.
8. The method of any preceding claim wherein the causing comprises causing the received information to be uploaded to the remote system.
9. The method of any preceding claim, wherein the communicating comprises providing the user with a reminder of a feature in their current environment.
10. The method of any preceding claim, wherein the communicating comprises providing the user with a cue relating to one of a person, a place, an object and an action.
11. The method of any preceding claim, wherein the communicating comprises presenting to the user a cue comprising an image relevant to their current context.
12. The method of any preceding claim wherein the mobile device comprises a mobile telecommunication device.
13. The method of any preceding claim wherein the contextual data comprises information indicating a current location of the user, optionally wherein the contextual data comprises global positioning system data.
14. The method of any preceding claim comprising receiving information comprising image data defining plural images, wherein the causing comprises causing data defining only some of the plural images to be uploaded to the remote system.
15. A computer program product for causing an apparatus to perform the method of any one of claims 1 to 14.
16. A method of aiding the memory of a user of a mobile device, comprising:
receiving, from a mobile device, information comprising contextual data and image data defining an image;
processing the received information; and
sending, to the mobile device, data associated with the user's current context on the basis of a result of the processing.
17. The method of claim 16, wherein the processing comprises comparing the received information to internal and/or external data sets.
18. The method of claim 16 or claim 17, wherein the processing comprises checking the user's online presence.
19. The method of any one of claims 16 to 18, wherein the processing comprises one of checking a social network service, checking an online calendar, and comparing the received information to images.
20. The method of any one of claims 16 to 19 wherein the data associated with the user's current context comprises a reminder of a factor in the user's current environment.
21. The method of claim 20, wherein the reminder comprises a cue relating to one of a person, a place, an object and an action.
22. A method, comprising:
receiving, from a mobile device, information comprising (a) image data defining an image captured by a camera of a device and one or more of (b) sensor data from one or more sensors of the device, (c) data indicating one or more rules of a set of capture rules which triggered capture of the image, and (d) contextual data;
evaluating whether the set of capture rules needs updating by processing the received information; and
sending, to the mobile device, rule update information when it is determined that the set of rules needs updating.
23. A computer program product for causing an apparatus to perform the method of any one of claims 16 to 22.
24. A method of aiding the memory of a user of a mobile device, the method comprising: receiving at the mobile device, from a peripheral device, information comprising image data defining an image;
causing information comprising contextual data and data defining the image to be uploaded from the mobile device to a remote system;
comparing, at the remote system, the information received to internal and/or external data sets;
sending, from the remote system to the mobile device, data associated with the user's current context on the basis of a result of the comparing; and
communicating to a user of the mobile device data associated with their current context, on the basis of the data from the remote system.
25. A method of operating a processor of a device that comprises a camera, one or more sensors, a transmitter, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and the processor,
wherein the method comprises:
compiling information comprising (a) image data defining an image captured by the camera and one or both of (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture, and
causing the transmitter to send the information towards a second device.
26. The method of claim 25, wherein the compiling comprises compiling information comprising the image data defining an image captured by the camera, the sensor data from the one or more sensors and the indication of one or more rules of the set of capture rules which triggered the capture.
27. The method of claim 25 or claim 26 comprising capturing the image by turning on the camera on the basis of a result of processing the sensor data from the one or more sensors against the set of capture rules.
28. The method of any one of claims 25 to 27 comprising compiling the information into a data object.
29. The method of any one of claims 25 to 28, wherein the compiling comprises appending the one or both of the sensor data and the data indicating one or more rules as metatags in an image file.
30. The method of any one of claims 25 to 29, comprising compiling information comprising image data defining an image captured by the camera and sensor data from the one or more sensors and other contextual data.
31. The method of any one of claims 25 to 30 comprising turning off the camera a predetermined set period of time after the image has been captured.
32. A method of operating a processor of a device for capturing images, which device comprises a camera, one or more sensors, a receiver, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and the processor,
wherein the method comprises:
receiving data from a second device via the receiver; and
changing the set of capture rules stored in the memory on the basis of the data received.
33. The method of claim 32 wherein the changing comprises setting a threshold against which sensor data is compared.
34. The method of claim 32 or claim 33 wherein the changing comprises setting a period of time during which a rule applies.
35. The method of any one of claims 25 to 34 wherein one or more of the capture rules lasts for a specified period of time.
36. The method of any one of claims 25 to 35 wherein the set of capture rules define that the camera is operable to capture more images during a certain time period than during another period of time.
37. The method of any one of claims 25 to 36 wherein the sensor data comprises data from one or more of a microphone, a passive infrared sensor (PIR), a light sensor, an accelerometer, and a compass.
38. A computer program product for causing an apparatus to perform the method of any one of claims 25 to 37.
39. An apparatus comprising a processor and memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus at least to perform:
compiling information comprising (a) image data defining an image captured by a camera and one or both of (b) sensor data from one or more sensors and (c) data indicating one or more rules of a set of capture rules which triggered the capture, which set of capture rules define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and
causing a transmitter to send the information.
40. An apparatus comprising a processor and memory including computer program code for one or more programs, the memory and the computer program code configured to, with the processor, cause the apparatus at least to perform:
receiving data; and
changing a set of capture rules stored in memory on the basis of the data received, which set of capture rules define when a camera is to be operated to capture an image on the basis of sensor data from one or more sensors.
41. An apparatus according to claim 39 or claim 40, comprising a microcontroller.
42. A device for capturing images, the device comprising a camera, one or more sensors, a transmitter, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to compile information comprising (a) image data defining an image captured by the camera and one or both of (b) sensor data from one or more of the sensors and (c) data indicating one or more rules of the set of capture rules which triggered the capture, and to cause the transmitter to send the information towards a second device.
43. A device comprising a camera, one or more sensors, a receiver, memory storing a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors, and a processor that is configured to change the set of capture rules stored in the memory on the basis of data received from a second device via the receiver.
44. The device of claim 42 or claim 43, wherein the memory stores a set of capture rules that define when the camera is to be operated to capture an image on the basis of sensor data from the one or more sensors and other contextual data.
45. The device of any one of claims 42 to 44, wherein the camera comprises a complementary metal oxide semiconductor (CMOS) camera sensor.
46. The device of any one of claims 42 to 45, wherein the sensors comprise one or more of a microphone, a passive infrared sensor (PIR), a light sensor, an accelerometer, and a compass.
47. The device of claim 42, wherein the transmitter comprises a wireless transmitter, optionally a Bluetooth transmitter.
48. The device of any one of claims 42 to 47, wherein the device is portable, optionally wherein the device is wearable by a person.
49. The device of any one of claims 42 to 48 wherein the device weighs no more than 80 grams, optionally wherein the device weighs no more than 70 grams, optionally wherein the device weighs no more than 60 grams, optionally wherein the device weighs no more than 50 grams.
PCT/EP2011/066049 2010-09-15 2011-09-15 Memory aid WO2012035119A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1015349.2 2010-09-15
GBGB1015349.2A GB201015349D0 (en) 2010-09-15 2010-09-15 Memory device

Publications (1)

Publication Number Publication Date
WO2012035119A1 true WO2012035119A1 (en) 2012-03-22

Family

ID=43065207

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/066049 WO2012035119A1 (en) 2010-09-15 2011-09-15 Memory aid

Country Status (2)

Country Link
GB (1) GB201015349D0 (en)
WO (1) WO2012035119A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010040986A1 (en) * 2000-05-12 2001-11-15 Koninlijke Philips Electronics N.V. Memory aid
GB2398402A (en) * 2003-02-17 2004-08-18 Comm Res Lab Providing contextual information to aid a user suffering memory loss
GB2403365A (en) * 2003-06-27 2004-12-29 Hewlett Packard Development Co Camera having behaviour memory
EP1793580A1 (en) * 2005-12-05 2007-06-06 Microsoft Corporation Camera for automatic image capture having plural capture modes with different capture triggers
US20080133697A1 (en) * 2006-12-05 2008-06-05 Palm, Inc. Auto-blog from a mobile device
US20100046842A1 (en) * 2008-08-19 2010-02-25 Conwell William Y Methods and Systems for Content Processing

Non-Patent Citations (18)

* Cited by examiner, † Cited by third party
Title
A. I. T. THÖNE-OTTO, K. WALTHER: "How to design an electronic memory aid for brain-injured patients: Considerations on the basis of a model of prospective memory", INTERNATIONAL JOURNAL OF PSYCHOLOGY, vol. 38, 2003, pages 236 - 236
A. SMEATON: "Content vs. Context for Multimedia Semantics: The Case of SenseCam Image Structuring", 2006, pages: 1 - 10
C. BUIZA ET AL.: "HERMES: Pervasive Computing and Cognitive Training for Ageing Well", DISTRIBUTED COMPUTING, ARTIFICIAL INTELLIGENCE, BIOINFORMATICS, SOFT COMPUTING, AND AMBIENT ASSISTED LIVING, 2009, pages 756 - 763, XP019120094
DE JAGER D ET AL: "A low-power, distributed, pervasive healthcare system for supporting memory", PROCEEDINGS OF THE INTERNATIONAL SYMPOSIUM ON MOBILE AD HOC NETWORKING AND COMPUTING (MOBIHOC) ; MOBILEHEALTH'11 (1ST ACM MOBIHOC WORKSHOP ON PERVASIVE WIRELESS HEALTHCARE, MOBILEHEALTH'11 - IN CONJUNCTION WITH MOBIHOC 2011 CONFERENCE, ACM, USA, 1 January 2011 (2011-01-01), pages 1 - 7, XP008147930, ISBN: 978-1-4503-0780-2, DOI: 10.1145/2007036.2007043 *
E. BERRY ET AL.: "The use of a wearable camera, SenseCam, as a pictorial diary to improve autobiographical memory in a patient with limbic encephalitis: A preliminary report", NEUROPSYCHOLOGICAL REHABILITATION, vol. 17, 2007, pages 582 - 601
E. WINOGRAD: "Some observations on prospective remembering", PRACTICAL ASPECTS OF MEMORY: CURRENT RESEARCH AND ISSUES, vol. 1, 1988, pages 348 - 353
F. A. HUPPERT ET AL.: "High prevalence of prospective memory impairment in the elderly and in early-stage dementia: Findings from a population-based study", APPLIED COGNITIVE PSYCHOLOGY, vol. 14, 2000, pages S63 - S81
F. A. HUPPERT, L. BEARDSALL: "Prospective memory impairment as an early indicator of dementia", JOURNAL OF CLINICAL AND EXPERIMENTAL NEUROPSYCHOLOGY, vol. 15, 1993, pages 805 - 805
F. I. M. CRAIK, A FUNCTIONAL ACCOUNT OF AGE DIFFERENCES IN MEMORY, 1986, pages 409 - 409
G. V. MERRETT ET AL., THE UNIFIED FRAMEWORK FOR SENSOR NETWORKS: A SYSTEMS APPROACH, 2006, Retrieved from the Internet <URL:eprints.ecs.soton.ac.uk/12955>
A. J. SELLEN ET AL.: "What brings intentions to mind? An in situ study of prospective memory", MEMORY, vol. 5, 1997, pages 483 - 507
J. ELLIS: "Prospective Memory or the Realization of Delayed Intentions: A Conceptual Framework for Research", PROSPECTIVE MEMORY: THEORY AND APPLICATIONS, 1996
J. HEALEY, R. W. PICARD, STARTLECAM: A CYBERNETIC WEARABLE CAMERA, 1998, page 42
M. L. LEE, A. K. DEY: "Providing good memory cues for people with episodic memory impairment", Tempe, Arizona, USA, 2007, pages 131 - 138
R. K. MAHURIN ET AL.: "Structured Assessment of Independent Living Skills: Preliminary Report of a Performance Measure of Functional Abilities in Dementia", JOURNAL OF GERONTOLOGY, vol. 46, 1991, pages 58 - 66
S. HODGES ET AL.: "SenseCam: A Retrospective Memory Aid", pages 177 - 193
S. MANN: "'WEARCAM' (THE WEARABLE CAMERA): PERSONAL IMAGING SYSTEMS FOR LONG-TERM USE IN WEARABLE TETHERLESS COMPUTER-MEDIATED REALITY AND PERSONAL PHOTO/VIDEOGRAPHIC MEMORY PROSTHESIS", 1998, page 124
STEVE HODGES ET AL: "SenseCam: A Retrospective Memory Aid", 1 January 2006, UBICOMP 2006: UBIQUITOUS COMPUTING LECTURE NOTES IN COMPUTER SCIENCE;;LNCS, SPRINGER, BERLIN, DE, PAGE(S) 177 - 193, ISBN: 978-3-540-39634-5, XP019040471 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014115040A3 (en) * 2013-01-23 2016-01-07 Orcam Technologies Ltd. Apparatus for processing images to prolong battery life
WO2014140853A3 (en) * 2013-03-15 2014-12-24 Orcam Technologies Ltd. Apparatus and method for automatic action selection based on image context
US9436887B2 (en) 2013-03-15 2016-09-06 OrCam Technologies, Ltd. Apparatus and method for automatic action selection based on image context
WO2015001400A1 (en) * 2013-07-03 2015-01-08 Eron Elektronik Bilgisayar Ve Yazilim Sanayi Tic. Ltd. Sti. A triggering system
KR101584685B1 (en) * 2014-05-23 2016-01-13 서울대학교산학협력단 A memory aid method using audio-visual data
US9778734B2 (en) 2014-05-23 2017-10-03 Seoul National University R&Db Foundation Memory aid method using audio/video data
US10789255B2 (en) * 2018-03-27 2020-09-29 Lenovo (Singapore) Pte. Ltd. Presenting data chunks for a working memory event

Also Published As

Publication number Publication date
GB201015349D0 (en) 2010-10-27

Similar Documents

Publication Publication Date Title
US11607182B2 (en) Voice controlled assistance for monitoring adverse events of a user and/or coordinating emergency actions such as caregiver communication
US11417427B2 (en) System and method for adapting alarms in a wearable medical device
US20160118044A1 (en) Mobile thought catcher system
US10051410B2 (en) Assist device and system
AU2010300097B2 (en) Tracking system
US20180144101A1 (en) Identifying diagnosis-relevant health information
US20150172441A1 (en) Communication management for periods of inconvenience on wearable devices
US20150086949A1 (en) Using user mood and context to advise user
US7716153B2 (en) Memory assistance system comprising of a signal processing server receiving a media signal and associated data relating to information to be remembered and processing the input signal to identify media characteristics relevant to aiding user memory
WO2012035119A1 (en) Memory aid
US20080162555A1 (en) Active lifestyle management
Page et al. Research directions in cloud-based decision support systems for health monitoring using Internet-of-Things driven data acquisition
WO2020098119A1 (en) Acceleration identification method and apparatus, computer device and storage medium
US20220400321A1 (en) Oral care monitoring and habit forming for children
CN112603327B (en) Electrocardiosignal detection method, device, terminal and storage medium
WO2016151494A1 (en) Environment-based pain prediction wearable
Stavropoulos et al. Multi-sensing monitoring and knowledge-driven analysis for dementia assessment
FR3008300A1 (en) DEVICE FOR MONITORING A PHYSIOLOGICAL CONDITION AND ALERTING THROUGH INTELLIGENT CLOTHING WITH INTEGRATED BIOMETRIC SENSORS, AN APPLICATION AND A CLOUD SYSTEM
Yoshihara et al. Life Log Visualization System Based on Informationally Structured Space for Supporting Elderly People
KR102297596B1 (en) System for detecting risks in real time
KR101054061B1 (en) How to search for services based on your physical condition
Santos et al. On the development strategy of an architecture for e-health service robots
Chaczko et al. Applications of cooperative WSN in homecare systems
Lutze et al. Connected Ambient Assistance
Weerasinghe et al. Predicting and Analyzing Human Daily Routine Using Machine Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11758199

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11758199

Country of ref document: EP

Kind code of ref document: A1