GB2517998A - Apparatus for collating content items and associated methods

Info

Publication number
GB2517998A
Authority
GB
United Kingdom
Prior art keywords
user
current
contextual
content data
context
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1316047.8A
Other versions
GB201316047D0 (en)
Inventor
Antti Kuivamaki
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Oyj
Original Assignee
Nokia Oyj
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Oyj filed Critical Nokia Oyj
Priority to GB1316047.8A priority Critical patent/GB2517998A/en
Publication of GB201316047D0 publication Critical patent/GB201316047D0/en
Publication of GB2517998A publication Critical patent/GB2517998A/en
Current legal status: Withdrawn

Classifications

    • H04W 4/02: Services making use of location information
    • H04W 4/025: Services making use of location information, using location-based information parameters
    • H04W 4/027: Services making use of location information, using movement velocity or acceleration information
    • H04W 4/029: Location-based management or tracking services
    • G06F 16/40: Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48: Retrieval of multimedia data characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/487: Retrieval characterised by using metadata, using geographical or spatial information, e.g. location
    • G06F 16/489: Retrieval characterised by using metadata, using time information
    • H04L 67/535: Network services; tracking the activity of the user

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Computer Hardware Design (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A method for collating content items comprises determining a current context of a user of an electronic device (e.g. climbing, eating at a restaurant, or another current activity, situation or environment, or the user's current motion or direction characteristics) and collating a plurality of contextual content data (e.g. an image or photograph, current location data, current weather data, recently played music, or audio or movie content) associated with the user's determined current context for provision as a single contextual output. The single contextual output may comprise a composite image (e.g. a real-time virtual postcard-style montage) formed from the plurality of contextual content data. The user may send the contextual output to a third party, for example by posting the output as a social media entry in the manner of an e-postcard.

Description

APPARATUS FOR COLLATING CONTENT ITEMS AND ASSOCIATED METHODS
Technical Field
The present disclosure relates to user interfaces, associated methods, computer programs and apparatus. Certain disclosed examples may relate to portable electronic devices, for example so-called hand-portable electronic devices which may be hand-held in use (although they may be placed in a cradle in use). Such hand-portable electronic devices include so-called Personal Digital Assistants (PDAs), mobile telephones, smartphones and other smart devices, and tablet PCs.
The portable electronic devices/apparatus according to one or more disclosed examples may provide one or more audio/text/video communication functions (e.g. tele-communication, video-communication, and/or text transmission (Short Message Service (SMS)/Multimedia Message Service (MMS)/e-mailing) functions), interactive/non-interactive viewing functions (e.g. web-browsing, navigation, TV/program viewing functions), music recording/playing functions (e.g. MP3 or other format and/or (FM/AM) radio broadcast recording/playing), downloading/sending of data functions, image capture functions (e.g. using a (e.g. in-built) digital camera), and gaming functions.
Background
A user may wish to send a message to a contact describing what they are doing. For example, a user may wish to post some news on a social media site, or tell friends about an event he/she is participating in.
The listing or discussion of a prior-published document or any background in this specification should not necessarily be taken as an acknowledgement that the document or background is part of the state of the art or is common general knowledge. One or more examples of the present disclosure may or may not address one or more of the background issues.
Summary
In a first example there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on a determined current context of a user of an electronic device, collate a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output.
A user may have a particular current context. For example, a user may currently be climbing a mountain, dancing at a nightclub, eating at a restaurant with family, or relaxing at a spa. The apparatus is configured to collate contextual content data items which are related to the user's current context. The user's context may be thought of as the user's current environment, current situation, current activity, or a current occurrence of a repeated user habit.
In the example of a user climbing the mountain, contextual content data may include a photograph of the mountain range, a photograph of the user, the user's current location (e.g., mountain, town, country), the user's current altitude, the weather conditions, and a textual message composed by the user such as "Getting to the top!", for example. The apparatus is configured to collate different contextual content data items for provision as a single contextual output. In certain examples the single contextual output may be, for example, a postcard-style image with a montage of images of the user and a mountain range, overlaid with the user's textual message and graphics relating to the user's location and altitude. Thus a consolidated and visually appealing item may be created by automatically collating information from, for example, different sources available to the apparatus.
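As a rough illustration of this collation step (a minimal sketch only; the names ContentItem and collate_contextual_output, and the tag-overlap rule, are assumptions for illustration rather than part of the disclosure), tagged content items could be selected by their overlap with the determined context:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContentItem:
        kind: str                      # "image", "text", "location", ...
        payload: str                   # file path or literal text
        tags: frozenset = frozenset()  # metadata labels used for matching

    def collate_contextual_output(context_tags, library):
        # Keep every stored item whose tags overlap the determined
        # context, and bundle the selection as one output.
        items = [it for it in library if it.tags & context_tags]
        return {"context": sorted(context_tags), "items": items}

    library = [
        ContentItem("image", "mountain_range.jpg", frozenset({"climbing", "mountain"})),
        ContentItem("image", "me.jpg", frozenset({"user"})),
        ContentItem("text", "Getting to the top!", frozenset({"climbing"})),
        ContentItem("text", "Party time!", frozenset({"party"})),
    ]
    out = collate_contextual_output(frozenset({"climbing", "user"}), library)
    print(len(out["items"]))  # 3 of the 4 items relate to the climbing context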
The apparatus may be configured to collate the plurality of contextual content data for provision as a single contextual output to one or more of a third party device and the user of the electronic device. For example, the user may wish to store the single contextual output as a memento of a particular event, similar to an electronic diary entry for the user.
The user may wish to send the single contextual output to a third party, such as posting the output as a social media entry as an "e-postcard"-type item, sending the output as a response to a received communication, or publishing the single contextual output on the internet, for example.
The current context may comprise one or more of: a current user location, a current user activity, a current user environment, and a milestone associated with the user. For example, a current user location may be a country, city, town, street or building; a current user activity may be cycling, dancing at a nightclub, or working; a current user environment may be noisy (e.g., in a factory), quiet (e.g., on a secluded beach), busy (e.g., at a children's party) or peaceful (e.g., at a spa); and a milestone associated with the user may be a birthday, Christmas, an anniversary, a new job, or the birth of a child.
The apparatus may be configured to determine the current context of a user using a user input indication of the current context, wherein the user input indication comprises one or more of: an out-of-office current context input indication; an unavailable current context input indication; a particular current user activity context input indication; a particular current user location context input indication; and a particular current social media status context indication. For example, a user may not wish to, or be able to, answer an incoming telephone call, and so may activate an "out-of-office" or "unavailable" automatic response mode for responding to incoming calls (or other messages from third parties).
As another example, a user may input to his device that he is about to go for a run (or the device may determine the user is running), and this input may cause the device to determine the current context of the user as "going for a run". As another example, a user's device may receive an input that the current location has changed to a different town (e.g., London), and this change in location may be detected causing the determination of the user's current context as "now located in London". As another example a user may change his social media status or log in to his social media account, and this may trigger the apparatus to determine the current context of the user, for example based on the content of a new social media status (e.g., cooking a meal, eating in a restaurant in London, or checking in to a particular location using a social media application).
The apparatus may be configured to determine the current context of a user using an automatic determination of a current activity being performed by the user, wherein the automatic determination comprises one or more of determination of the current activity by the electronic device or an apparatus associated with the electronic device. For example, the apparatus may receive signalling from a location determination device (e.g., a global positioning system (GPS) device) that indicates that the user's current activity is travelling by train (for example, from the user's speed and current location being associated with a rail route). The user's current context of travelling by train may thus be determined by the apparatus.
The apparatus may be configured to determine the current context of a user using an automatic determination of a current activity being performed by the user, wherein the automatic determination comprises one or more of determination of motion characteristics and determination of current user calendar event data by the electronic device or an apparatus associated with the electronic device. For example, the apparatus may receive signalling from a pedometer indicating that the user has started the activity of running, and thus the apparatus receives an indication that the user's current context is running. As another example, the apparatus may receive information from a calendar application indicating that the user is currently scheduled to attend a birthday party, and thus the user's current context has changed to attending a birthday party.
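Purely as a sketch of such automatic determination (the thresholds, cadence source and function name are invented for illustration), motion characteristics and calendar data might be combined as follows, with a live calendar event taking priority:

    import datetime

    def determine_context(speed_kmh, cadence_spm, events, now):
        for start, end, title in events:
            if start <= now <= end:
                return title                # a live calendar event wins
        if cadence_spm is not None and cadence_spm > 140:
            return "running"                # pedometer-style step cadence
        if 10 <= speed_kmh <= 35:
            return "cycling"
        if speed_kmh > 60:
            return "travelling by car or train"
        return "unknown"

    party = (datetime.datetime(2013, 7, 1, 19, 0),
             datetime.datetime(2013, 7, 1, 22, 0),
             "attending a birthday party")
    print(determine_context(22.0, None, [party],
                            datetime.datetime(2013, 7, 1, 12, 0)))  # cycling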
The contextual content data may comprise one or more of: pre-stored image content data (such as photographs/movies taken by the user, or portions of the same, including movie clips or movie stills), pre-stored sound content data (such as music files), pre-stored textual content data (such as user-composed messages), current location data, current time data, current weather data, and data representing recently played music. Other pre-stored content which may be collated includes information extracted from websites, such as logos, timetables, and news items. Certain current information may be considered to be sensor information, for example if the apparatus/device is capable of receiving sensor information providing the current temperature (from a thermometer), location and/or speed (from a location device such as a GPS system), or activity (from a gyroscope). Such information may be collated in a single contextual output.
The pre-stored data may be stored on the electronic device or may be available to the electronic device from a remote source, for example at a remote server or cloud accessible by the apparatus.
The single contextual output may comprise one or more of: a composite image formed from the plurality of contextual content data; an image annotated with text, formed from image content data and textual content data; an image with an audio file, formed from image content data and audio content data; and an image with a movie file, formed from image content data and movie content data. The image, audio and movie content may not necessarily be pre-stored but may be current, such as music or sound playing at the user's current location, a movie which the user is currently recording (and once recorded, may be collated in a single contextual output), or a movie still or movie clip from a movie currently being watched or recorded.
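A composite image of this kind could be rendered in many ways; the following minimal sketch uses the Pillow imaging library, with the layout and the helper name make_postcard assumed for illustration:

    from PIL import Image, ImageDraw

    def make_postcard(photo_paths, caption, size=(800, 600)):
        # Tile the photographs side by side and overlay a text caption,
        # postcard-style.
        canvas = Image.new("RGB", size, "white")
        tile_w = size[0] // max(len(photo_paths), 1)
        for i, path in enumerate(photo_paths):
            photo = Image.open(path).resize((tile_w, size[1] - 80))
            canvas.paste(photo, (i * tile_w, 0))
        ImageDraw.Draw(canvas).text((10, size[1] - 60), caption, fill="black")
        return canvas

    # make_postcard(["route1.jpg", "route2.jpg"],
    #               "Can't chat now, I've got to go faster!").save("postcard.jpg")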
The apparatus may be configured to allow user input of new data to be added to the collated contextual content data for provision as the single contextual output to the third party device. Thus, for example, a user may have data already collated which includes a current date, time, location, weather conditions, and a photograph of the user. The user may be able, for example, to take a new picture of the people the user is currently with, and this new photograph may be added to the other previously collated contextual content data for provision together as a single contextual output.
The single contextual output may comprise pre-stored collated content data, the pre-stored collated content data stored prior to the determination of the current context of the user. Thus, for example, the apparatus may collate and create a single contextual output from data already available to the apparatus (such as pre-stored photographs and/or text) without requiring the user to record any new data specifically to allow the collation/creation to be performed.
The apparatus may be configured to collate the contextual content data by matching metadata for respective pre-stored contextual content data with the determined current context of the user. For example, the user's current context may be determined to be "party with Angelo" taken from a user's calendar entry. The apparatus may be configured to search for content items having metadata matching "party" and/or "Angelo" to find, for example, a textual message "Party time!" and photographs of the user with Angelo.
The metadata may comprise a metadata label associated with pre-stored content data, such as a personal name, a location name, an activity type, a location name/type, a year, an occasion name, a subject of an image, a sound file name/metadata label, and a video file name/metadata label. The metadata may comprise a word and/or phrase within pre-stored contextual text content data, such as a word or words within a phrase, or pre-composed text. The metadata may comprise a person's name, a place name, an event name, a song title, an album title, an artist name, a date, or a time, for example.
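For illustration only, the metadata matching described above might reduce to a keyword intersection between the determined context and each item's labels (the stopword list and helper names are assumptions):

    STOPWORDS = {"a", "an", "the", "with", "at", "in", "to"}

    def context_keywords(context):
        # "party with Angelo" -> {"party", "angelo"}
        return {w.lower() for w in context.split() if w.lower() not in STOPWORDS}

    def match_items(context, items):
        # items: (name, metadata labels); keep items sharing a keyword.
        keys = context_keywords(context)
        return [name for name, labels in items
                if keys & {label.lower() for label in labels}]

    items = [("angelo_and_me.jpg", {"Angelo", "2012"}),
             ("party_time.txt", {"party", "greeting"}),
             ("commute_map.png", {"cycling", "Keilalahti"})]
    print(match_items("party with Angelo", items))
    # ['angelo_and_me.jpg', 'party_time.txt']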
The contextual content data may comprise user generated data, the data generated during a previous context occasion for the user. For example, the user may take photographs of family at a family reunion, and those photographs may be considered to match the context of a later, different, family reunion which the user attends.
The apparatus may be configured to determine the current context of a user.
The apparatus may be configured to determine the current context of the user by using one or more of: a user input indication of the current context, and an automatic determination of a current activity being performed by the user. For example, the apparatus may receive user input indicating the current context, or the apparatus may be able to detect the current context of the user. For example, a microphone may detect high noise levels, determine the user's location to match that of a nightclub, and determine the time to be late at night. Thus the apparatus may determine the user is dancing at the nightclub.
The apparatus may be configured to check the determined current context against a predetermined number of contexts and, based on matching the determined current context against one or more of the predetermined number of contexts, collate the plurality of contextual content data associated with the user's determined current context for provision as the single contextual output to the third party device. This may occur at particular time instances. Thus, for example, the apparatus need not continuously sense/detect each change in the user's current context and, based on any change, collate contextual content data for provision of a single contextual output. The apparatus may, for example, only collate current contextual data to produce a single contextual output in particular circumstances, such as upon a calendar event becoming current, or upon the user's current location matching a favourite or pre-specified location of the user. The user may be able to specify under what conditions the apparatus performs the collation in a preferences menu or similar.
The apparatus may be configured to create a new single contextual output based on a previously stored single contextual output if the current and previous user contexts are similar according to a predetermined similarity criterion. For example, a user may cycle along the same route to work each day, or the user may visit family in the same location regularly. A new instance of the user being in such a context may be matched to a previous instance by, for example, determining that the location, activity, time of day and/or other contacts with the user are the same. Thus the apparatus may, for example, use a similar format of single contextual output that the user is happy with for similar contexts, but may update, for example, the time, date, weather conditions and arrangement of images within the single contextual output to reflect the current context/conditions.
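One possible predetermined similarity criterion, sketched here with invented fields and an invented one-hour tolerance, compares activity, location and time of day:

    def contexts_similar(current, previous, max_hour_gap=1):
        return (current["activity"] == previous["activity"]
                and current["location"] == previous["location"]
                and abs(current["hour"] - previous["hour"]) <= max_hour_gap)

    previous = {"activity": "cycling", "location": "Keilalahti - Alppiharju", "hour": 8}
    current = {"activity": "cycling", "location": "Keilalahti - Alppiharju", "hour": 9}
    if contexts_similar(current, previous):
        print("reuse stored layout; refresh date, time and weather fields")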
The apparatus may be configured to prompt the user to at least one of: confirm starting the collation, confirm selection of the contextual content data items to use in the collation, and transmit the single contextual output to the third party device.
The apparatus may be configured to transmit the single contextual output to the third party device having a particular predefined categorisation. For example, if the apparatus is configured to send the single contextual output as an automatic response to incoming messages, the single contextual output may be automatically sent to contacts labelled as friends and family, but not to business colleagues (who may receive a standard "out of office" type response to an e-mail or may be connected to the user's answering machine, for example).
The apparatus may be configured to transmit the single contextual output to the third party device as a response to a particular type of message received from a third party. For example, the single contextual output may be sent in response to an incoming call if the user's current environment is noisy, and the single contextual output may be sent as a response to an incoming text-based message if the user's hands are not currently free (for example, if the user is determined to be driving based on the device being put into a hands-free/driving mode).
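For illustration, this choice might be a small dispatch on the incoming message type and the user's situation (the dictionary keys and return strings are assumptions, not part of the disclosure):

    def auto_response(incoming_type, situation):
        # Reply with the collated output only when the user plausibly
        # cannot respond in kind.
        if incoming_type == "call" and situation.get("noisy"):
            return "send single contextual output"
        if incoming_type == "text" and situation.get("hands_busy"):
            return "send single contextual output"
        return "handle normally"

    print(auto_response("call", {"noisy": True}))        # noisy nightclub
    print(auto_response("text", {"hands_busy": True}))   # hands-free/driving mode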
The apparatus may be configured to provide the single contextual output to the third party device by one or more of: e-mail; SMS; MMS; social media; upload to a website; near field communication; Bluetooth; other wireless communication; and wired communication.
The apparatus may be one or more of: a portable electronic device, a mobile phone, a smartphone, a tablet computer, a surface computer, a laptop computer, a personal digital assistant, a graphics tablet, a pen-based computer, a digital camera, a watch, a non-portable electronic device, a desktop computer, a monitor/display, a household appliance, a server, or a module for one or more of the same.
According to a further example there is provided a computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor, perform at least the following: based on a determined current context of a user of an electronic device, collate a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device.
A computer program may be stored on a storage medium (e.g. on a CD, a DVD, a memory stick or other non-transitory medium). A computer program may be configured to run on a device or apparatus as an application. An application may be run by a device or apparatus via an operating system. A computer program may form part of a computer program product. Corresponding computer programs for implementing one or more of the methods disclosed are also within the present disclosure and encompassed by one or more of the described examples.
According to a further example, there is provided a method, the method comprising: based on a determined current context of a user of an electronic device, collating together a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device.
According to a further example there is provided an apparatus comprising: means for collating together a plurality of contextual content data associated with a determined current context of a user of an electronic device, for provision as a single contextual output to a third party device based on a determined current context of the user.
The present disclosure includes one or more corresponding aspects, examples or features in isolation or in various combinations whether or not specifically stated (including claimed) in that combination or in isolation. Corresponding means and corresponding function units (e.g., current context determiner, contextual content data collator, single contextual output provider, metadata matcher, user prompter) for performing one or more of the discussed functions are also within the present disclosure.
The above summary is intended to be merely exemplary and non-limiting.
Brief Description of the Figures
A description is now given, by way of example only, with reference to the accompanying drawings, in which:
figure 1 illustrates an example apparatus comprising a number of electronic components, including memory and a processor, according to one example of the present disclosure;
figure 2 illustrates an example apparatus comprising a number of electronic components, including memory, a processor and a communication unit, according to another example of the present disclosure;
figure 3 illustrates an example apparatus comprising a number of electronic components, including memory and a processor, according to another example of the present disclosure;
figures 4a-4c illustrate an example of collating a plurality of pre-stored contextual content data associated with the user's determined current context for provision as a single contextual output, according to examples of the present disclosure;
figures 5a-5b illustrate another example of collating a plurality of current (and pre-stored) contextual content data associated with the user's determined current context for provision as a single contextual output, according to examples of the present disclosure;
figures 6a-6b illustrate example single contextual outputs comprising image, audio and movie content, according to examples of the present disclosure;
figures 7a-7b illustrate an example of transmitting a single contextual output as a response to an incoming telephone call, and to a received e-mail, according to examples of the present disclosure;
figure 8 illustrates an example of transmitting a single contextual output as a response to a particular category of contact, according to examples of the present disclosure;
figures 9a-9b illustrate examples of presenting a user with an option of transmitting a single contextual output to a particular person or group, according to examples of the present disclosure;
figures 10a-10b each illustrate an apparatus in communication with a remote computing element;
figure 11 illustrates a flowchart according to an example method of the present disclosure; and
figure 12 illustrates schematically a computer readable medium providing a program.
Description of Example Aspects
A user may wish to send a message to a contact describing what they are currently doing.
For example, a user may be currently participating in an event and wish to tell friends about the event. The event may be a sporting event, a family event, a social event, or a business event, for example. In another example, the user may not wish to be disturbed and may wish for his/her apparatus/device to send an automated reply to a third party who is trying to contact the user (e.g., by telephone or by sending an electronic message), so that the third party knows why the user is not responding. In some examples the user may wish to send the information by, for example, e-mail, SMS, or by posting a message using a social media application.
It may be possible to obtain different items of information relating to a user's current situation. For example, a personal electronic device may be capable of receiving information about the current date and time, the weather, the temperature, the current location, any music files currently or recently played, any recently visited locations, and other information. A personal electronic device may also be capable of accessing previously stored images (e.g., photographs) and/or capturing new images using a camera. A user may wish to provide some of this information to a third party in an easily understandable format, as an update of the user's current situation/context. A user may wish to obtain such information and collate it in a neat, visually appealing way, as a "snapshot" of his/her current activity.
Examples discussed herein may be considered to, based on a determined current context of a user of an electronic device, collate a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device. That is, the current context of a user may be determined, and based on the user's context, a plurality of associated contextual content data may be collated as a single contextual output. This output may be used, for example, to respond to an incoming communication from a third party, used as a social media post by the user, or simply stored by the user as a reminder of a particular occasion.
Other examples depicted in the figures have been provided with reference numerals that correspond to similar features of earlier described examples. For example, feature number 100 can also correspond to numbers 200, 300 etc. These numbered features may appear in the figures but may not have been directly referred to within the description of these particular examples. These have still been provided in the figures to aid understanding of the further examples, particularly in relation to the features of similar earlier described examples.
Figure 1 shows an apparatus 100 comprising memory 107, a processor 108, input I and output O. In this example only one processor and one memory are shown but it will be appreciated that other examples may utilise more than one processor and/or more than one memory (e.g. same or different processor/memory types).
In this example the apparatus 100 is an Application Specific Integrated Circuit (ASIC) for a portable electronic device with a touch sensitive display. In other examples the apparatus can be a module for such a device, or may be the device itself, wherein the processor 108 is a general purpose CPU of the device and the memory 107 is general purpose memory comprised by the device. The display, in other examples, may not be touch sensitive.
The input I allows for receipt of signalling to the apparatus 100 from further components, such as components of a portable electronic device (like a touch-sensitive or hover-sensitive display) or the like. The output O allows for onward provision of signalling from within the apparatus 100 to further components such as a display screen, speaker, or vibration module. In this example the input I and output O are part of a connection bus that allows for connection of the apparatus 100 to further components.
The processor 108 is a general purpose processor dedicated to executing/processing information received via the input I in accordance with instructions stored in the form of computer program code on the memory 107. The output signalling generated by such operations from the processor 108 is provided onwards to further components via the output O.
The memory 107 (not necessarily a single memory unit) is a computer readable medium (solid state memory in this example, but may be other types of memory such as a hard drive, ROM, RAM, Flash or the like) that stores computer program code. This computer program code stores instructions that are executable by the processor 108, when the program code is run on the processor 108. The internal connections between the memory 107 and the processor 108 can be understood to, in one or more examples, provide an active coupling between the processor 108 and the memory 107 to allow the processor 108 to access the computer program code stored on the memory 107.
In this example the input I, output O, processor 108 and memory 107 are all electrically connected to one another internally to allow for electrical communication between the respective components I, O, 107, 108. In this example the components are all located proximate to one another so as to be formed together as an ASIC, in other words, so as to be integrated together as a single chip/circuit that can be installed into an electronic device. In other examples one or more or all of the components may be located separately from one another.
Figure 2 depicts an apparatus 200 of a further example, such as a mobile phone. In other examples, the apparatus 200 may comprise a module for a mobile phone (or PDA or audio/video player), and may just comprise a suitably configured memory 207 and processor 208.
The example of figure 2 comprises a display device 204 such as, for example, a liquid crystal display (LCD), e-Ink or touch-screen user interface. The apparatus 200 of figure 2 is configured such that it may receive, include, and/or otherwise access data. For example, this example 200 comprises a communications unit 203, such as a receiver, transmitter, and/or transceiver, in communication with an antenna 202 for connecting to a wireless network and/or a port (not shown) for accepting a physical connection to a network, such that data may be received via one or more types of networks. This example comprises a memory 207 that stores data, possibly after being received via antenna 202 or port, or after being generated at the user interface 205. The processor 208 may receive data from the user interface 205, from the memory 207, or from the communication unit 203. It will be appreciated that, in certain examples, the display device 204 may incorporate the user interface 205. Regardless of the origin of the data, these data may be outputted to a user of apparatus 200 via the display device 204, and/or any other output devices provided with the apparatus. The processor 208 may also store the data for later use in the memory 207. The memory 207 may store computer program code and/or applications which may be used to instruct/enable the processor 208 to perform functions (e.g. read, write, delete, edit or process data).
Figure 3 depicts a further example of an electronic device 300 comprising the apparatus of figure 1. The apparatus 100 can be provided as a module for device 300, or even as a processor/memory for the device 300 or a processor/memory for a module for such a device 300. The device 300 comprises a processor 308 and a storage medium 307, which are connected (e.g. electrically and/or wirelessly) by a data bus 380. This data bus 380 can provide an active coupling between the processor 308 and the storage medium 307 to allow the processor 308 to access the computer program code. It will be appreciated that the components (e.g. memory, processor) of the device/apparatus may be linked via cloud computing architecture. For example, the storage device may be a remote server accessed via the internet by the processor.
The apparatus 100 in figure 3 is connected (e.g. electrically and/or wirelessly) to an input/output interface 370 that receives the output from the apparatus 100 and transmits this to the device 300 via data bus 380. Interface 370 can be connected via the data bus 380 to a display 304 (touch-sensitive or otherwise) that provides information from the apparatus 100 to a user. Display 304 can be part of the device 300 or can be separate.
The device 300 also comprises a processor 308 configured for general control of the apparatus 100 as well as the device 300 by providing signalling to, and receiving signalling from, other device components to manage their operation.
The storage medium 307 is configured to store computer code configured to perform, control or enable the operation of the apparatus 100. The storage medium 307 may be configured to store settings for the other device components. The processor 308 may access the storage medium 307 to retrieve the component settings in order to manage the operation of the other device components. The storage medium 307 may be a temporary storage medium such as a volatile random access memory. The storage medium 307 may also be a permanent storage medium such as a hard disk drive, a flash memory, a remote server (such as cloud storage) or a non-volatile random access memory. The storage medium 307 could be composed of different combinations of the same or different memory types.
Figures 4a-4c illustrate an example of collation of a plurality of contextual content data associated with a user's determined current context by an apparatus/device 400. The collation is made based on a determined current context of the user. The collated contextual content data can be provided as a single contextual output, for example for transmission to a third party device.
The user has an apparatus/device 400 which can access various functionalities. In this example, the apparatus/device 400 can be used to play music, compose textual messages, is GPS enabled to determine the user's current location, and can access photographs and other stored content. The user is currently cycling to work along a pre-planned route while listening to music using his apparatus/device 400. The user usually cycles to work along this particular route, thus the user's current context may be considered to reflect the user's habits.
Figure 4a shows that the apparatus/device 400 collates image data and other data, both relating to the user's current context, for provision as a single contextual output 450 shown in figure 4b. The collated image data in this example are photographs 404 which the user has previously taken along his cycling route, a photograph of the user 406 which has been previously taken, and a symbol of a bicycle 408 (which is associated with the user's current context of cycling). The bicycle symbol data 408 relates to the user's current activity. Data which has been previously stored, such as photographs or text, may be stored at the apparatus/device 400 or may be stored remotely, for example at a server or cloud, and may be accessed by the apparatus/device 400.
The current date and time 410 are included in the collation. A map and the name of the pre-planned cycling route ("Keilalahti - Alppiharju") 412 which the user is currently cycling along are included for collation and may be obtained from a navigation application. This data 412 may be considered to be a current location of the user. The last few music tracks 414 which the user listened to whilst cycling along the route are also included for collation.
The user also pre-composed some lines of text 416 relating to cycling, which are collated with the other data.
The collated data 404, 406, 408, 410, 412, 414, 416 is provided by the apparatus/device 400 as a single contextual output 450 as shown in figure 4b. This single contextual output 450 may be considered to be a composite image formed from the plurality of contextual content data 404, 406, 408, 410, 412, 414, 416 which is annotated with text 410, 412, 414, 416; 472, 474, 476, 466, 468, 470.
The single contextual output 450 in this example is a postcard-style montage including the collated data. Thus the output includes: photographs 452, 454 related to the user's cycling route; a photograph of the user 456; a cycling symbol 458; a labelled route map 460 indicating 462, 464 where along the route the photographs 452, 454 were captured; a list of recently played songs 466; two textual messages, "High-speed warning!" 468 and "Can't chat now, I've got to go faster!" 470, which the user previously composed; the name of the user's route 472; the current date 474; and the current time 476. The single contextual output 450 in this example combines pre-stored image data, pre-stored textual messages, pre-stored navigation/map data, and current data from a music player application and from a clock.
Figure 4c illustrates another example of single contextual output 480 created by the apparatus/device 400 using collated current contextual data associated with the user's determined current context of cycling. In this example the user's average speed 482 and top speed 484 are included in the montage along with other data as in figure 4b.
The apparatus/device 400 in this example is configured to collate at least some of the contextual content data by matching metadata for respective pre-stored contextual content data with the determined current context of the user. The metadata of the content items in this example comprises metadata labels associated with pre-stored contextual content data and words included within contextual text.
The current context of the user (i.e., cycling a known route) may be associated with the metadata terms "cycling", "bike", "ride", "speed", "Keilalahti" and "Alppiharju" for example.
The apparatus 400 is configured to search memory available to the apparatus/device 400 for content items having matching metadata. Thus in this example, the pre-stored images 454, 456 are labelled with the metadata tag "cycling" because these relate to cycling along a route, another pre-stored image 452 is labelled with the metadata tag "Keilalahti" because it was taken in that location, and the text 468 recites the metadata "speed". Thus these content items are identified, through metadata matching, as relating to the user's current context and are thus included in the single contextual output 450.
Certain content items having metadata matching the user's current context may be associated with other particular content items which may not have metadata directly matching the user's current context. If a content item having matching metadata is identified for collation, another content item linked with the identified content item may also be included in the collation. In this example the text 468 "High-speed warning!" has metadata ("speed") which matches the user's current context. It is linked to the text 470 "Can't chat now, I've got to go faster!", which does not have matching metadata but is linked to text which does. Thus both pre-stored text items 468, 470 may be collated for inclusion in the single contextual output 450. That is, some content items may be linked so that one content item matching the current context causes other content items to be indirectly included. Example types of metadata label include a person's name, a place name, an event name (e.g., birthday), a song title, an album title, an artist name, a date, and a time.
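This indirect inclusion amounts to a transitive closure over the links between items; a minimal sketch follows (the link table and function name are illustrative assumptions):

    from collections import deque

    def expand_with_links(matched, links):
        # Breadth-first walk so that items linked to a matched item,
        # directly or through further links, are also collated.
        included, queue = set(matched), deque(matched)
        while queue:
            for linked in links.get(queue.popleft(), ()):
                if linked not in included:
                    included.add(linked)
                    queue.append(linked)
        return included

    links = {"High-speed warning!": ["Can't chat now, I've got to go faster!"]}
    print(expand_with_links({"High-speed warning!"}, links))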
The apparatus/device 400 may be configured to determine the current context of a user using a user input indication of the current context. In this example, the user may have activated an "out-of-office" or "unavailable" mode on his apparatus/device 400 just prior to cycling because he knows he will be unable to respond to incoming calls and messages while cycling. Thus the user has provided an "out-of-office/unavailable" current context input indication so the apparatus 400 recognises that the user is unavailable for contact.
The user's selection of an "out-of-office/unavailable" mode also acts as a prompt to the apparatus 400 to determine the current context of the user. In other examples, the user may provide a particular current user activity context input indication, for example, selecting "I am cycling" from an activity tracker application so that the apparatus/device 400 determines the user is currently cycling.
The apparatus/device 400 may be configured to determine the current context of a user using an automatic determination of a current activity being performed by the user. Thus the user need not necessarily provide a particular input so that the apparatus 400 is aware of the user's current context. The automatic determination may comprise determination of the current activity by the electronic device and/or by an apparatus associated with the electronic device. In this example, it may be that the apparatus can access the user's pre-stored cycling route 412, 460 and can determine the user's current location and speed using a GPS device either integrated within the apparatus/device 400 or accessible by the apparatus/device 400. Thus the apparatus/device 400 may determine that the user is located on a cycling route which is used by the user, and that the user is travelling at a speed consistent with cycling. From this information the apparatus 400 may determine that the user is currently cycling. This example may also be considered to be an example of the automatic determination of the user's context using motion characteristics (namely, the user's current speed and direction of travel).
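A sketch of such a route-plus-speed determination follows; the waypoint coordinates are rough approximations for Keilalahti and Alppiharju, and the distance tolerance and speed band are invented for illustration (a real system would use point-to-segment distances rather than a per-waypoint check):

    import math

    def haversine_km(a, b):
        # Great-circle distance between two (lat, lon) points in degrees.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    def is_cycling(fix, speed_kmh, route, tolerance_km=0.2):
        near_route = any(haversine_km(fix, wp) <= tolerance_km for wp in route)
        return near_route and 10 <= speed_kmh <= 35

    route = [(60.176, 24.827), (60.184, 24.940)]        # approximate waypoints
    print(is_cycling((60.1762, 24.8274), 22.0, route))  # True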
In some examples, the apparatus 400 may be able to detect variations within a particular context. Therefore if the apparatus 400 determines the user is cycling to work (for example, from the user's current travelling position and the time of day), different current contextual data may be collated than if the user is cycling back home, for example.
The apparatus/device 400 may provide the user with a prompt to confirm that it has detected the user's current context correctly.
In some examples the user may be able to edit the content of the single contextual output 450, 480 to personally customise it, for example by composing new text, re-arranging the images, or adding or deleting specific contextual content, for example.
In some examples, the apparatus may prompt the user before transmitting the single contextual output to a third party device. For example, the apparatus may provide the single contextual output to the user (e.g., by displaying a preview on a screen of the user's electronic device) and ask the user if he/she wishes to, for example, post it to a social media site, attach it to an outgoing electronic message, or e-mail it to a group of contacts, for example. In the example of figures 4a-4c, upon completing the cycling route (or during the ride) the apparatus may, after approval/confirmation by the user, post the single contextual output image 450, 480 to a social media webpage, for example. In this way the user can easily update his/her social media presence using visually appealing and personalised images/output which are relevant to his/her current activities. This may require little effort from the user due to the apparatus collating a plurality of relevant content automatically based on a determination of the user's current context.
In some examples the apparatus may be configured to automatically transmit a single contextual output 450, 480 to a third party device. For example, in relation to figures 4a-4c, upon completing the cycling route (or during a pause along the route) the apparatus 400 may automatically post the single contextual output image 450, 480 on the user's website or blog. This may be considered an example of the apparatus automatically transmitting a single contextual output according to the user's habits. For example, each time the user completes a bike ride or other activity, or each time the user visits a particular location (such as a favourite café), the apparatus may automatically upload a single contextual output to the user's personal webpage as a type of virtual diary entry, for example.
In other examples, past contextual information may be retrieved and used in relation to the user's current context. Tagged data such as application files (e.g., photographs, movies, documents, calendar entries, maps) and device usage information may be used with habitual information (e.g., each Wednesday evening the user plays football, each Sunday morning the user goes to church with a group of friends) to create a single contextual output in relation to a current context. For example, current contextual data collated for a user on a Sunday morning may include photographs of the user's church and church friends, and a bible quote, for example. This information is determined based on the user usually visiting church on a Sunday morning, and using information captured at that time on previous Sundays as well as information sourced from a reference (such as an online Bible). As another example, a user may be able to specify a timeframe and data items captured or related to events in that timeframe which match the user's current context may be collated for use in a single contextual output.
Crowd-sourced information about the user's past behaviour and/or information may be used to predict the user's current context. Such predictions may be used by the apparatus as an indication of the user's current context. Crowd-sourced information may include trending or habitual behaviour in a specific context obtained from the "cloud" or internet, for example. For example, many people who are located at Notting Hill in London on a particular date may have devices which provide information to the "cloud", for example through social media updates, to show that they are celebrating at the Notting Hill Carnival.
If the user is also located in that place at that time, he/she may also be celebrating the Carnival. Thus the user's current context may be determined to be "celebrating at Notting Hill Carnival" through comparison of the user's location with that of other people in a similar context who are providing information to the "cloud".
Contextual content items may also be obtained using crowd-sourced information. Thus, in the example above, the apparatus may be able to identify stock photos of the Carnival from the internet/"cloud", prepare text such as "Carnival time!", and identify a Carnival-themed song for use in a single contextual output for the user. Text may be identified from currently trending keywords relevant to the user's current context for use in a single contextual output, such as "#carnivaltime" used on a social media site by people currently located close to the user.
In another example, the single contextual output 450, 480 may be used as an auto-reply for any telephone calls which were received by the user's device during the user's travel. Thus if the user is cycling and cannot answer a telephone call or electronic communication (e.g., e-mail, SMS message), the apparatus/device 400 may automatically send the output 450, 480 as a response to an incoming message so that the third party knows why the user cannot respond to their communication. The third party thus receives an informative, personal, and readily understandable response from the user.
Another example of automatic determination of a user's context using motion characteristics involves an apparatus/device in communication with a pedometer comprising a gyroscope. For example, a user may enjoy running. If the pedometer detects motion consistent with the user jogging, then this information may be transmitted to the apparatus/device to indicate the user's current context as jogging. The apparatus/device may, upon determining that the user is jogging, collate a plurality of contextual content data associated with jogging, such as an image of the user running, an image of the user's current location which they are jogging past, the current weather conditions, the user's average speed and the distance they have run so far.
The apparatus may be configured to determine the current context of a user using an automatic determination of current user calendar event data. The determination may be made by the user's electronic device or an apparatus associated with the user's electronic device. For example, a user may have a calendar entry stored for 8pm on 1 July stating "Dinner with Julie at Mario's". At 8pm on 1 July, the apparatus may collate contextual content data such as the user's current location (Mario's restaurant), photographs of the user and her friend Julie, the current time and date, and a contextually appropriate textual message, which may have been prepared by the user (e.g., "Can't talk, I'm eating!").
The apparatus may be able to use current sensor data for comparison with calendar data as a check that the user is keeping her calendar appointment. The apparatus may compare the user's currently detected location via GPS or similar with the location noted in the calendar entry, or compare the locations of the user and of her friend Julie and check they are in the same location, for example. Examples of sensors whose data may be used to determine a user's current context include: a gyroscope, an accelerometer, a compass, a barometer, a proximity sensor, a microphone, an ambient luminosity detector, a camera, a touch sensitive screen, a temperature sensor/thermometer, a GPS system, Wi-Fi circuitry, Bluetooth circuitry, NFC circuitry, a heart rate meter, a blood pressure meter, a humidity sensor, a pollution/air quality sensor, a radioactivity sensor, and a radar system. The sensor(s) may form part of the user's electronic device, or may be in communication with the user's electronic device as an accessory/peripheral device.
The combination of data obtained from two or more such sensors may provide contextual content data. For example, an elevated blood pressure reading combined with an accelerometer reading indicating the user is not moving may indicate the user is experiencing stress. Detection of an elevated heart rate and motion indicating that the user is travelling at 25 km/h may indicate that the user is cycling (as opposed to travelling by car or train, for example).
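A sketch of this kind of two-sensor fusion, with every threshold invented for illustration:

    def fuse_sensors(heart_rate_bpm, speed_kmh, moving):
        # Combining two readings disambiguates what either alone cannot.
        if heart_rate_bpm > 100 and not moving:
            return "possibly stressed"    # effort without motion
        if heart_rate_bpm > 120 and 15 <= speed_kmh <= 35:
            return "cycling"              # effort at a bike-like speed
        if heart_rate_bpm < 90 and speed_kmh > 60:
            return "travelling by car or train"
        return "unknown"

    print(fuse_sensors(135, 25, moving=True))    # cycling
    print(fuse_sensors(110, 0, moving=False))    # possibly stressed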
The apparatus may be configured to connect to one or more other devices and obtain contextual content data from them. For example, the apparatus may be configured to receive information indicating friends who are currently with the user (for example, by identifying nearby Bluetooth-enabled devices having pre-stored Bluetooth identifiers matching those of devices of the user's friends recorded in the user's contact list). As another example, the apparatus may receive information from an external vehicle such as a car (for example, information relating to the user's current speed may be received to indicate whether the user is travelling in his car at high speed or is stationary, for example because he is stuck in a traffic jam). As another example, information may be obtained from household appliances of the user to indicate e.g., what television program the user is currently watching, or what cooking appliances are currently activated while the user is at home (thereby indicating the user is currently cooking).
In some examples, one or more items of contextual content data may be obtained from existing contextual information. For example, the apparatus may be able to obtain habitual information indicating, for example, when the user is usually at home, and what personal preferences, activities and context the user usually has at home. For example, a user may usually be at work from Monday to Friday, and usually be at home on Saturday and Sunday with his/her family and children. This habitual context data may be used to determine and/or confirm the user's current context and identify relevant contextual content.
In some examples, the single contextual output may comprise pre-stored collated content data. The pre-stored collated content data may be stored prior to the determination of the current context of the user. For example, a user may regularly sit examinations. The user's device may have access to a pre-prepared single contextual output relating to examinations (for example, including images of books and certificates, and text such as "Exam time again!"). This single contextual output may be pre-stored and available for use upon the apparatus determining the user's current context to be that he is sitting an examination (for example, the apparatus may use a calendar entry noting the examination, or the apparatus may determine that the user's current location is a building which is known to be an examination venue). The user may then, for example, leave his apparatus/device outside the exam hall, and any incoming calls or messages received during the examination period may be automatically responded to with a reply comprising the pre-prepared single contextual output. In some examples the single contextual output may be annotated with the current date and time in addition to the pre-stored collated content.
The apparatus may be configured to check the determined current context against a predetermined number of contexts. Based on matching the determined current context against one or more of the predetermined number of contexts, the apparatus may collate the plurality of contextual content data associated with the user's determined current context for provision as the single contextual output. For example, the user may not wish the apparatus to collate contextual content data upon determination of each change in user context (for example, upon each change of location, occurrence of each calendar entry, upon each time a social media application is used, and/or upon each different type of motion of the user). Thus the user may specify, for example, that the apparatus should only collate contextual content data associated with particular determined current contexts, such as upon changing location to a different country (but not a different town), and upon calendar events associated with contacts tagged as favourites, but not calendar events associated with any non-favourite contacts. In this way the user can control when collation is performed in relation to particular events/contexts.
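The matching against predetermined contexts might be sketched as a simple membership test over user-configured context labels (the labels below are illustrative assumptions):

    # Hypothetical sketch: only collate when the determined current context
    # matches one of the user's predetermined contexts of interest.

    ENABLED_CONTEXTS = {
        "changed country",               # but not merely a different town
        "calendar event with favourite", # but not events with non-favourites
    }

    def should_collate(determined_context):
        return determined_context in ENABLED_CONTEXTS

    print(should_collate("changed country"))  # True  -> collation proceeds
    print(should_collate("changed town"))     # False -> no collation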
In some examples the apparatus may prompt the user to confirm starting collation, so that the user can control the apparatus' activity in relation to collating current contextual data.
For example, if the user device's battery is low, he may not wish to use remaining battery power to perform collation. In some examples the apparatus may prompt the user to confirm selection of contextual content data items to use in collation. For example, the apparatus may present a menu of available contextual content which it has identified as being relevant to the user's current context. The user may be able to select his/her preferences for content for inclusion in the single contextual output and the apparatus may provide the single contextual output based on the user's choice of content.
Figures 5a-5b illustrate an example of collation of a plurality of contextual content data associated with a user's determined current context by an apparatus 500. The collation is made based on the determined current context of the user. The collated contextual content data is provided as a single contextual output 550. The current context in this example may be considered to be the user's current activity of snowboarding, the user's current location (Grenoble, France), and the user's current environment (e.g., picturesque and quiet).
The user in this example has an apparatus/device 500 with various functionalities. In this example, the apparatus/device 500 can play music, take photographs using front and rear-facing cameras, and determine the user's current location via GPS. The user is currently on a snowboarding holiday. The user has snowboarded down a mountain and arrived at a beautiful hidden location with fresh snow, and she really wants to tell her friends about it.
In this example, the user uses her apparatus/device 500 to take a photograph of herself using the front-facing camera, and a photograph of the beautiful scenery with the rear-facing camera. This action of taking photographs acts as a trigger for the apparatus to collate the just-captured photographic data 504 with other data associated with the user's current context. In this example the other data includes the current weather conditions 506, the current date and time 508, the current location 510, a textual message composed by the user 512, the last music tracks which the user listened to while snowboarding 514, and an icon of a snowboarder (which may be considered to be activity data 516 as it indicates the user's current activity of snowboarding). The current context data is collated 518 to produce a single contextual output 550.
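Conceptually, the collation 518 might resemble the following sketch, in which separately gathered content items are merged into a single output record; the field names are assumptions for illustration, and rendering the record as a postcard-style montage would be a separate step using platform imaging facilities:

    # Hypothetical sketch: collating separately gathered contextual content
    # items into one record representing the single contextual output.

    import datetime

    def collate(photos, weather, location, message, last_tracks, activity_icon):
        """Merge contextual content data items into a single contextual output."""
        return {
            "photos": photos,
            "weather": weather,
            "date_time": datetime.datetime.now().isoformat(timespec="minutes"),
            "location": location,
            "message": message,
            "music": last_tracks,
            "activity": activity_icon,
        }

    output = collate(
        photos=["selfie.jpg", "scenery.jpg"],
        weather="-4 C, sunny",
        location="Grenoble, France",
        message="Per-fect!",
        last_tracks=["Track A", "Track B"],
        activity_icon="snowboarder.png",
    )
    print(output["location"])  # Grenoble, France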
The apparatus/device 500 may be considered to allow user input of new data (that is, the newly captured photographs 504, 552, 554) to be added to the collated contextual content data for provision as the single contextual output to the third party device. That is, the apparatus/device 500 may have collated the user's current context data of the location, time, date, weather, music output, and current activity. Upon recording the new photographs the apparatus/device 500 includes these photographs with the other collated data to form the single contextual output.
The collated data 504, 506, 508, 510, 512, 514, 516 is provided by the apparatus/device 500 as a single contextual output 550 as shown in figure 5b. The single contextual output 550 in this example is a postcard-style montage including the collated data. Thus the output includes: the newly-captured photographs 552, 554; a snowboarding icon 556; the textual message "Per-fect!" 558; the current weather conditions 560 including a written description and a symbol; the last listened-to music 562; the user's current location 564; and the current date 566 and time 568. The single contextual output in this example combines newly captured image data 552, 554, pre-stored image data 556, a pre-stored textual message 558, and current data from a music player application 562, a weather application 560, a location determination device 564 and a clock/calendar 566, 568.
The apparatus/device 500 may be configured to operate in a "share" mode such that, when the user captures the new photographs 504, 552, 554, the apparatus/device 500 automatically collates the new photograph(s) with other content items related to the user's current context and automatically posts the collated single contextual output, for example, to the user's social media page for her friends to see. In some examples the user may be able to add a personal textual message for inclusion in the single contextual output, such as "Per-fect!". The user can therefore quickly and easily tell a "story" about her current activities in a personal and appealing way by posting a personalised and relevant composite output for her contacts/friends to see.
The single contextual output 450, 480, 550 may create a rich, informative, collage-like visualisation of the user's current experience in a single image/output. The photographs may be mixed together in the output using visual effects such as transparency to create a visually pleasing effect.
The single contextual output need not be a single composite image (annotated or otherwise). In some examples the single contextual output may comprise a (composite) image with an audio file, formed from image content data and audio content data; and/or a (composite) image with a movie file, formed from image content data and movie content data.
The example of Figure 6a relates to a user walking through a forest. Figure 6a illustrates a single contextual output 600 comprising a single image 602 of a forest, a schematic diagram of a route 604 walked by the user through the forest, a textual message composed by the user 606, and the name of a song 608 relating to the user's current context. In this example, the song 608 "The Cure - A Forest" is not (necessarily) the last song listened to by the user, but is contextually linked (by the song title) to the user's current context of walking through a forest. In this example, the single contextual output 600 also comprises audio content data. For example, the song displayed in the single contextual output 608 may play, or sound effects of a person walking through a forest may play, for example. Thus, for example, if the single contextual output was attached to an e-mail message, upon the recipient opening the attachment, the image would be displayed and the song would play.
The example of figure 6b relates to a user who has just won a gold medal at a diving competition. Figure 6b illustrates a single contextual output 650 comprising a single image 652 of a gold medal, and two textual messages "Gold medal dive!" 654 and "London school championships" 656. The textual messages may be composed by the user, or may be retrieved based on, for example, a calendar entry of the user or based on the location of the user and a website indicating the event taking place at the location at the current time. In this example, the single contextual output 650 also comprises movie content data 658. In this example the movie 658 was recorded by a friend of the user at the diving event and records the user diving into the pool. Movie content data 658 may have associated sound output in some examples. When the single contextual output 650 is viewed, the movie may automatically play (or may play if the viewer chooses a "play" option). Thus the single contextual output may be an annotated image, a composite image, and/or an image with associated sound and/or movie content (which may be a clip of a larger movie). In this example the current context of the user may be considered to comprise a milestone associated with the user (namely, winning a gold medal). Other "milestone" contexts include a user's birthday, anniversary, or other personal celebration, or a celebration shared with others such as Christmas or Easter, for example.
Textual content for inclusion in a single contextual output may, in some examples, be automatically generated based on the identified contextual data relating to the user's current context. For example, if the apparatus receives information that the user's current location is near Mont Blanc and the user's current altitude is increasing, a textual message may be automatically generated such as "Coming to getcha Mont Blanc!" because it has been determined that the user is climbing Mont Blanc. As another example, if the user's speed is determined to be above a particular threshold, the automatic text "High speed warning!" may be generated for use in a single contextual output. Another example may be the automatic generation of the textual message "Quiet country life" if the user's location is determined to be in the countryside (for example, from a map application indicating that there is a low population density at the user's current location) and the microphone detects low noise levels.
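Such automatic text generation might be sketched as an ordered set of rules over the identified contextual data; the rule conditions and messages mirror the examples above, while the field names and threshold are illustrative assumptions:

    # Hypothetical sketch: rule-based generation of a textual message from
    # identified contextual data, mirroring the examples in the text.

    SPEED_THRESHOLD_KMH = 130  # illustrative threshold

    def generate_text(ctx):
        if ctx.get("near") == "Mont Blanc" and ctx.get("altitude_trend") == "rising":
            return "Coming to getcha Mont Blanc!"
        if ctx.get("speed_kmh", 0) > SPEED_THRESHOLD_KMH:
            return "High speed warning!"
        if ctx.get("population_density") == "low" and ctx.get("noise_level") == "low":
            return "Quiet country life"
        return None  # no rule matched; no automatic text

    print(generate_text({"near": "Mont Blanc", "altitude_trend": "rising"}))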
In some examples, the user's current context may be related to an extended time period relating to the particular activity the user is currently engaged in. In such an example, contextual current data may be related to the user's current context and also to related events from the extended time period activity having a similar context to the user's current context. For example, a user may usually be resident in Nottingham but for the last month has been trekking in South East Asia. Current contextual information may include photographs taken from the trekking holiday over the last month as well as more recent/current photographs, and may include textual content (such as "great times with Peter, John and Maya") relating to new contacts (in this example, fellow travellers called Peter, John and Maya) which the user has recorded over the last month.
In such examples, the single contextual output may be presented in a timeline format to indicate the user's recent and current activity having a particular context. Earlier photographs of the user's travels may be presented on the left of the single contextual output image and current photographs may be presented on the right, for example. Other examples of current contextual data relating to current events and contextually related events over a longer time period include an image collage of photographs (e.g., taken during a summer break), a colour coded map of visited places (such as cities visited in a holiday), or images of friends who the user has been with during a particular contextual event (such as fellow travellers).
In some examples, the contextual content data to be included in a single contextual output may be based on available information to predict an appropriate response, for example based on data relating to past events and application information. For example, John is driving a car after work to meet his friend Kate. His meeting with Kate may be stored as data available to the apparatus through either behaviour prediction (e.g., John regularly drives to Kate's house at the same time each day after work) or from an event stored in a calendar application (e.g., a calendar entry noting the meeting with Kate). If Kate calls John whilst he is driving, an automatic reply may be sent back to Kate comprising a single contextual output including, for example, a map of John's route from work (his past location) to Kate's house, an automatically generated textual message "Stuck in traffic, I'll be 15 minutes late" as determined from his current speed and route information from a map application, and images of John and Kate together (determined from his predicted context of going to visit Kate). Of course, many other contextual content items may be included (for example, the current time and weather conditions, or other information based on past/habitual events, such as John and Kate usually going out for a Chinese meal on Wednesday nights).
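A rough, non-limiting sketch of how such a predicted reply might be assembled follows; the 15-minute figure arises from the illustrative route numbers chosen here, and the function and field names are assumptions:

    # Hypothetical sketch: assembling a predicted auto-reply from current
    # travel data and predicted context, as in the John-and-Kate example.

    def predicted_reply(remaining_km, speed_kmh, expected_minutes, shared_photos):
        """Estimate delay from route data and build the reply content."""
        if speed_kmh <= 0:
            eta_minutes = float("inf")  # stationary, e.g. stuck in traffic
        else:
            eta_minutes = remaining_km / speed_kmh * 60
        delay = max(0, round(eta_minutes - expected_minutes))
        return {
            "message": f"Stuck in traffic, I'll be {delay} minutes late",
            "map": "route_work_to_kate.png",   # illustrative asset name
            "photos": shared_photos,
        }

    reply = predicted_reply(remaining_km=10, speed_kmh=24, expected_minutes=10,
                            shared_photos=["john_and_kate.jpg"])
    print(reply["message"])  # Stuck in traffic, I'll be 15 minutes late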
In some examples, the apparatus may be configured such that an operating system or an application running on the user's device is configured to determine a change in context and provide this determination to the apparatus. The operating system/application determining the change in context may determine appropriate contextual content to include in the single contextual output. In some examples a computer program running on the user's device may be configured to have learning properties such that it collates information about a user's current context (periodically, or upon a change in context being detected) and uses the collated information as an indication of past habits and user preferences in order to provide more personal single contextual outputs for a user.
The apparatus may be configured to use information stored in one or more applications accessible by the user's electronic device to obtain contextual content data. Examples of such applications are: a map/location application, a personal activity tracker (e.g., a sports application, an event planner such as a wedding or party planner, or weight loss application), a weather application, an alarm clock application, a calendar application, a music player application, and a movie player application.
Figures 7a-7b illustrate an example of transmitting a single contextual output from a user apparatus/device 700 as a response to a particular type of third party communication based on the user's current context. In this example the user is at a concert and the music is very loud. Thus the user cannot easily talk or hear in a telephone call, but may be able to respond to a text based communication. In this example, the apparatus/device 700 is configured to collate together a plurality of contextual content data relating to the user's current context (that is, attending a concert) to create a single contextual output as an automatic response to an incoming communication.
In figure 7a, the user's apparatus/device 700 receives an incoming telephone call from a third party 702. The apparatus/device 700 is configured to send an automatic response to the caller 708 comprising the single contextual output in the event of an incoming telephone call if the apparatus/device 700 determines the sound levels to be above a predetermined threshold, such that the user could not easily take the incoming call. The apparatus may be considered to be in a "contextual auto-reply" mode for incoming telephone calls if the sound levels are high. The single contextual output may be transmitted as an MMS, for example, if the incoming call is made using a mobile telephone, or, if the incoming call is made by a contact for whom the user has an email address, the single contextual image may be transmitted as an e-mail attachment. In some examples the user may be presented with an option of choosing by which method they wish to send the single contextual output. In other examples the apparatus 700 may automatically select, for example, the cheapest available method for automatically transmitting the single contextual output.
In figure 7b, the user's apparatus/device 700 receives an e-mail from a third party 752.
Again, the sound levels are detected as being high 754 due to the concert. However, because the incoming communication 752 is not a telephone call, responding to the communication does not require the user to be able to hear the incoming message. The apparatus/device 700 in this case does not automatically send a single contextual output as a response to the incoming message 752. The apparatus may be considered to be out of a "contextual auto-reply" mode in relation to incoming communications which do not require the user to hear the message (such as e-mails, SMS, non-audio MMS or social media updates). In this example, the apparatus/device 700 may prompt the user 758 that a single contextual output can be sent as a response if the user wishes to (for example, so they are not distracted and miss part of the concert). In another example, the apparatus may simply not take any action 758 if a written/visual communication is received, so the user can choose to reply on the spot or wait until a later time to respond.
Another example is of a user who is currently driving. The user may be able to accept incoming telephone calls using a hands-free kit to speak to the caller, but the user cannot respond to a text-based message whilst driving because his/her hands are not free. The user's apparatus/device may be configured to determine that the user is currently driving (for example, based on a currently determined location, motion type and/or speed). If the user receives an incoming telephone call, he/she may be able to answer the call and speak to the caller. If the user receives an incoming text-based message, the apparatus/device may automatically send a single contextual output to the third party sender because the user cannot respond. In other examples, the apparatus/device may sound an alert to the user indicating that a message has been received and that a single contextual output may be transmitted back as a response. The user may then be able to recite a command, for example, which instructs the apparatus to send a single contextual output as a response to the incoming text-based communication.
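Taken together, the concert and driving examples suggest a small decision table over the user's context and the incoming communication type, sketched below with illustrative labels:

    # Hypothetical sketch: deciding whether to auto-send a single contextual
    # output, based on the user's context and the incoming communication type.

    def should_auto_reply(context, incoming_type):
        if context == "noisy":    # e.g. at a concert: cannot hear a call
            return incoming_type == "call"
        if context == "driving":  # hands-free speech OK, typing is not
            return incoming_type in ("sms", "email", "mms")
        return False

    print(should_auto_reply("noisy", "call"))   # True  (figure 7a)
    print(should_auto_reply("noisy", "email"))  # False (figure 7b)
    print(should_auto_reply("driving", "sms"))  # True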
In some examples, upon determining that the user's current context has changed, the apparatus may be configured to provide a prompt to the user of any automatic single contextual output which has been sent before the change in context. For example, a user may have been at a loud concert and so automatic single contextual outputs have been sent in response to incoming telephone calls while at the concert. Upon determining that the context has changed (for example, the microphone detects lower noise levels) the user's device displays a prompt to the user to indicate, for example, what single contextual outputs have been transmitted and which missed calls the outputs have been sent in response to.
In some examples a user may be able to specify that a single contextual output is sent as an automatic reply, and/or as an automatic social media update, via a particular application or by specifying a hierarchy of applications. For example a user may be able to determine which of a plurality of available e-mail addresses a single contextual output is sent from. As another example, the user may specify that an automatic social media status update should be sent to a particular group of contacts. The user may use several social media applications (for example, SM1, SM2 and SM3) and e-mail. Upon determination that a single contextual output should be sent to the group, firstly the output is sent using the SM1 application to all contacts in the group having an account with that application. Those contacts that do not have an SM1 account, but do have an SM2 account will receive the output to that SM2 account. Those contacts that do not have an SM1 or SM2 account, but do have an SM3 account, will receive the output to that SM3 account. Any remaining contacts who do not have an SM1, SM2 or SM3 account but do have e-mail will receive the output as an attachment to an email.
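This hierarchy amounts to a first-match fallback per contact, which might be sketched as follows (SM1, SM2 and SM3 follow the text; the account representation is an assumption for illustration):

    # Hypothetical sketch: delivering the single contextual output to each
    # group member via the highest-priority channel that contact actually has.

    HIERARCHY = ["SM1", "SM2", "SM3", "email"]

    def delivery_channel(contact_accounts):
        """Pick the first channel in the hierarchy the contact has an account for."""
        for channel in HIERARCHY:
            if channel in contact_accounts:
                return channel
        return None  # no way to reach this contact

    print(delivery_channel({"SM2", "email"}))  # SM2
    print(delivery_channel({"email"}))         # email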
Figure 8 illustrates an example of transmitting a single contextual output from a user apparatus/device 800 as a response to a contact entry having a particular categorisation.
For example, the user's apparatus/device may have access to one or more contact lists/address books. Each contact may be categorised into one or more categories (such as family, friends, business contact, social media, services, personal, and place of business, for example). The user may want the apparatus/device 800 to (automatically) transmit single contextual outputs as responses to some categories of contact but not others. For example, the user may not want to send single contextual outputs as responses to communications from service providers (e.g., doctor, plumber, accountant) or to places of business (e.g., work office, child's nursery). In this example the apparatus/device 800 is configured to transmit a single contextual output to a third party device having a particular predefined categorisation.
In figure 8, the user receives an incoming call 802 to his apparatus/device 800. The user is currently located at a place with high noise levels. Thus the apparatus 800 determines that the user's current context is "noisy" via a microphone 804, for example, and is configured to automatically reply to incoming calls from friends with a single contextual output 806. The single contextual output will only be automatically sent to contacts categorised as friends in the event of an incoming call from a friend being received. Thus the apparatus/device 800 checks if the incoming call was made by a contact categorised as a friend 808. If so, then a single contextual output is sent to the caller as an automatic response 810. If not (e.g., the call was made by a business colleague), then the apparatus does not send a single contextual output 812 and may, for example, let the call pass to an answering machine service 814. In this way, the user can personalise how single contextual outputs are provided to certain contacts as automatic responses. In some examples, the user may be able to label users in his contact list(s)/address book(s) as contacts who can, in particular, automatically receive single contextual outputs as responses if the user is unable to respond to their incoming communications. Such contacts may be considered to be categorised as "auto context reply" contacts.
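The check of figure 8 might reduce to a category lookup, sketched below with hypothetical contacts and categories:

    # Hypothetical sketch: auto-reply with a single contextual output only for
    # incoming calls from contacts categorised as friends, as in figure 8.

    CATEGORIES = {
        "Julie": {"friend"},
        "Dr Smith": {"service provider"},
    }

    def handle_incoming_call(caller, context):
        if context == "noisy" and "friend" in CATEGORIES.get(caller, set()):
            return "send single contextual output"
        return "pass to answering machine"

    print(handle_incoming_call("Julie", "noisy"))     # send single contextual output
    print(handle_incoming_call("Dr Smith", "noisy"))  # pass to answering machine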
Figures 9a-9b illustrate example apparatus/devices 900, 950 which are configured to present a user interface option to a user for transmission of a single contextual output to a selected contact or group of contacts.
Figure 9a shows a contact list presented to a user of an apparatus/device 900 on a touch sensitive display screen 902. Each contact entry 904 is displayed alongside a series of icons 906, 908, 910, 912 indicating how that contact may be communicated with using the apparatus/device 900. The user can select the corresponding icon displayed next to a particular contact's name to contact that person via the selected method. For example, a user may, for a particular contact, select the e-mail icon 910 to open an e-mail application with a blank e-mail addressed to that contact, select the "sms" icon 908 to open a message editor with the message history between the user and that contact displayed and with a blank message window for continuing the message thread, or select the call icon 906 to place a telephone call to that contact.
In this example, the user may also select the "paperclip" icon 912 which would cause the apparatus 900 to transmit a single contextual output based on the user's current context to that contact. Upon selecting the paperclip icon 912, the apparatus/device 900 is configured to collate contextual content data associated with the user's current context and prepare a single contextual output, and transmit it to the selected contact. The single contextual output may be transmitted by the method most frequently used to communicate visual-based messages with that contact, or via the cheapest transmission option available with that contact, or after presenting a user with different available options for a transmission method, for example.
Figure 9b illustrates an apparatus/device 950 displaying a social media application on a touch sensitive display screen 952. In this example the user is presented with a "context update" virtual button 954 which the user can select to cause the apparatus 950 to collate contextual content data associated with the user's current context and prepare a single contextual output, and post it on the user's social media page.
As another example, if a user is composing an e-mail in an e-mail application, or an MMS message in a messaging editor, the user may be presented with an on-screen virtual "paperclip" button so that a single contextual output may be collated and attached for transmission with the message being composed. The message may be sent to one or more contacts (new or existing). This may be likened to a real-time virtual postcard. Of course, in the above examples, a different user input, rather than interaction with a "paperclip" button, may be used to send a single contextual output.
In some examples, the apparatus/device may be configured such that one or more favourite sharing actions are indicated by options which are pinned to a homescreen of the apparatus/device. For example, a user may configure a "share context with friends" button on a home page which, if selected, causes the apparatus/device to collate current contextual data into a single contextual output and automatically e-mail this single contextual output to contacts marked as "friends" in the user's address book. Other such favourite sharing actions may be, for example, "post to social media page", "MMS to family", or "e-mail to hobby/sports group", for example.
Single contextual outputs may be transmitted, for example, by e-mail, SMS, MMS, social media, upload to a website, near field communication, Bluetooth, other wireless communication and/or wired communication.
Apparatus/devices disclosed here may be one or more of: a portable electronic device, a mobile phone, a smartphone, a tablet computer, a surface computer, a laptop computer, a personal digital assistant, a graphics tablet, a pen-based computer, a digital camera, a watch, a non-portable electronic device, a desktop computer, a monitor/display, a household appliance, a server, or a module for one or more of the same.
Figure 10a shows an example of an apparatus 1000 in communication with a remote server. Figure 10b shows an example of an apparatus 1000 in communication with a "cloud" for cloud computing. In figures 10a and 10b, apparatus 1000 (which may be apparatus 100, 200 or 300) is also in communication with a further apparatus 1002. The apparatus 1002 may be a touch screen display, an input/output device, or a sensor, for example. In other examples, the apparatus 1000 and further apparatus 1002 may both be comprised within a device such as a portable communications device or PDA.
Communication may be via a communications unit, for example.
Figure 10a shows the remote computing element to be a remote server 1004, with which the apparatus 1000 may be in wired or wireless communication (e.g. via the internet, Bluetooth, NFC, a USB connection, or any other suitable connection as known to one skilled in the art). In figure 10b, the apparatus 1000 is in communication with a remote cloud 1010 (which may, for example, be the Internet, or a system of remote computers configured for cloud computing). For example, one or more items of pre-stored content, such as images, textual messages, symbols, calendar entries, and/or other media information may be stored remotely at the server 1004/cloud 1010 and be accessible by the apparatus 1000 for collation. As another example, the apparatus 1000 may control the collation of the plurality of contextual content data at the remote server or cloud (and may send any data which is not available at the server/cloud to the server/cloud for collation), for subsequent access of the single contextual output by the apparatus 1000. In other examples the remote server 1004/cloud 1010 may provide the single contextual output to a third party device, and/or store it for later reference by the user.
Figure 11a illustrates a method 1100 according to an example of the present disclosure.
The method comprises, based on a determined current context of a user of an electronic device, collating together a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device.
Figure 12 illustrates schematically a computer/processor readable medium 1200 providing a program according to an example of this disclosure. In this example, the computer/processor readable medium is a disc such as a Digital Versatile Disc (DVD) or a compact disc (CD). In other examples, the computer readable medium may be any medium that has been programmed in such a way as to carry out the functionality herein described.
The computer program code may be distributed between multiple memories of the same type, or multiple memories of a different type, such as ROM, RAM, flash, hard disk, solid state, etc. Any mentioned apparatus/device/server and/or other features of particular mentioned apparatus/device/server may be provided by apparatus arranged such that they become configured to carry out the desired operations only when enabled, e.g. switched on, or the like. In such cases, they may not necessarily have the appropriate software loaded into the active memory in the non-enabled (e.g. switched off) state and may only load the appropriate software in the enabled (e.g. switched on) state. The apparatus may comprise hardware circuitry and/or firmware. The apparatus may comprise software loaded onto memory. Such software/computer programs may be recorded on the same memory/processor/functional units and/or on one or more memories/processors/functional units.
In some examples, a particular mentioned apparatus/device/server may be pre-programmed with the appropriate software to carry out desired operations, and wherein the appropriate software can be enabled for use by a user downloading a "key", for example, to unlock/enable the software and its associated functionality. Advantages associated with such examples can include a reduced requirement to download data when further functionality is required for a device, and this can be useful in examples where a device is perceived to have sufficient capacity to store such pre-programmed software for functionality that may not be enabled by a user.
Any mentioned apparatus/circuitry/elements/processor may have other functions in addition to the mentioned functions, and these functions may be performed by the same apparatus/circuitry/elements/processor. One or more disclosed aspects may encompass the electronic distribution of associated computer programs and computer programs (which may be source/transport encoded) recorded on an appropriate carrier (e.g. memory, signal).
Any computer" described herein can comprise a collection of one or more individual processors/processing elements that may or may not be located on the same circuit board, or the same region/position of a circuit board or even the same device. In some examples one or more of any mentioned processors may be distributed over a plurality of devices.
The same or different processor/processing elements may perform one or more functions described herein.
The term "signalling' may refer to one or more signals transmitted as a series of transmitted and/or received electrical/optical signals. The series of signals may comprise one, two, three, four or even more individual signal components or distinct signals to make up said signalling. Some or all of these individual signals may be transmitted/received by wireless or wired communication simultaneously, in sequence, and/or such that they temporally overlap one another.
With reference to any discussion of any mentioned computer and/or processor and memory (e.g. including ROM, CD-ROM etc), these may comprise a computer processor, Application Specific Integrated Circuit (ASIC), field-programmable gate array (FPGA), and/or other hardware components that have been programmed in such a way to carry out the inventive function.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole, in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that the disclosed aspects/examples may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the disclosure.
While there have been shown and described and pointed out fundamental novel features as applied to examples thereof, it will be understood that various omissions and substitutions and changes in the form and details of the devices and methods described may be made by those skilled in the art without departing from the scope of the disclosure.
For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the disclosure. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or examples may be incorporated in any other disclosed or described or suggested form or example as a general matter of design choice.
Furthermore, in the claims means-plus-function clauses are intended to cover the structures described herein as performing the recited function and not only structural equivalents, but also equivalent structures. Thus although a nail and a screw may not be structural equivalents in that a nail employs a cylindrical surface to secure wooden parts together, whereas a screw employs a helical surface, in the environment of fastening wooden parts, a nail and a screw may be equivalent structures.

Claims (26)

1. An apparatus comprising: at least one processor; and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: based on a determined current context of a user of an electronic device, collate a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output.
2. The apparatus of claim 1, wherein the apparatus is configured to collate the plurality of contextual content data for provision as a single contextual output to one or more of a third party device and the user of the electronic device.
3. The apparatus of claim 1, wherein the current context comprises one or more of: a current user location; a current user activity; a current user environment; and a milestone associated with the user.
4. The apparatus of claim 1, wherein the apparatus is configured to determine the current context of a user using a user input indication of the current context, and wherein the user input indication comprises one or more of: an out-of-office current context input indication; an unavailable current context input indication; a particular current user activity context input indication; a particular current user location context input indication; and a particular current social media status context indication.
5. The apparatus of claim 1, wherein the apparatus is configured to determine the current context of a user using an automatic determination of a current activity being performed by the user, and wherein the automatic determination comprises one or more of determination of the current activity by the electronic device or an apparatus associated with the electronic device.
6. The apparatus of claim 1, wherein the apparatus is configured to determine the current context of a user using an automatic determination of a current activity being performed by the user, and wherein the automatic determination comprises one or more of determination of motion characteristics and determination of current user calendar event data by the electronic device or an apparatus associated with the electronic device.
7. The apparatus of claim 1, wherein the contextual content data comprises one or more of pre-stored image content data, pre-stored sound content data, pre-stored textual content data, current location data, current time data, current weather data, and data representing recently played music.
8. The apparatus of claim 7, wherein the pre-stored data is stored on the electronic device or available to the electronic device from a remote source.
9. The apparatus of claim 1, wherein the single contextual output comprises one or more of: a composite image formed from the plurality of contextual content data; an image annotated with text, formed from image content data and textual content data; an image with an audio file, formed from image content data and audio content data; and an image with a movie file, formed from image content data and movie content data.
10. The apparatus of claim 1, wherein the apparatus is configured to allow user input of new data to be added to the collated contextual content data for provision as the single contextual output to the third party device.
11. The apparatus of claim 1, wherein the single contextual output comprises pre-stored collated content data, the pre-stored collated content data stored prior to the determination of the current context of the user.
12. The apparatus of claim 1, wherein the apparatus is configured to collate the contextual content data by matching metadata for respective pre-stored contextual content data with the determined current context of the user.
13. The apparatus of claim 12, wherein the metadata comprises one or more of: a metadata label associated with pre-stored content data; and a word and/or phrase within pre-stored contextual text content data.
14. The apparatus of claim 13, wherein the metadata label comprises one or more of: a person's name, a place name, an event name, a song title, an album title, an artist name, a date, and a time.
15. The apparatus of claim 1, wherein the contextual content data comprises user generated data, the data generated during a previous context occasion for the user.
16. The apparatus of claim 1, wherein the apparatus is configured to determine the current context of a user.
17. The apparatus of claim 1, wherein the apparatus is configured to determine the current context of the user by using one or more of a user input indication of the current context and an automatic determination of a current activity being performed by the user.
18. The apparatus of claim 1, wherein the apparatus is configured to check the determined current context against a predetermined number of contexts and, based on matching the determined current context against one or more of the predetermined number of contexts, collate the plurality of contextual content data associated with the user's determined current context for provision as the single contextual output to the third party device.
19. The apparatus of claim 1, wherein the apparatus is configured to create a new single contextual output based on a previously stored single contextual output if the current and previous user contexts are similar according to a predetermined similarity criterion.
20. The apparatus of claim 1, wherein the apparatus is configured to prompt the user to at least one of: confirm starting the collation, confirm selection of the contextual content data items to use in the collation, and transmit the single contextual output to the third party device.
21. The apparatus of claim 1, wherein the apparatus is configured to transmit the single contextual output to the third party device having a particular predefined categorisation.
22. The apparatus of claim 1, wherein the apparatus is configured to transmit the single contextual output to the third party device as a response to a particular type of message received from a third party.
23. The apparatus of claim 1, wherein the apparatus is configured to provide the single contextual output to the third party device by one or more of e-mail; SMS; MMS; social media; upload to a website; near field communication; Bluetooth; and wired communication.
24. The apparatus of claim 1, wherein the apparatus is one or more of: a portable electronic device, a mobile phone, a smartphone, a tablet computer, a surface computer, a laptop computer, a personal digital assistant, a graphics tablet, a pen-based computer, a digital camera, a watch, a non-portable electronic device, a desktop computer, a monitor/display, a household appliance, a server, or a module for one or more of the same.
25. A method comprising: based on a determined current context of a user of an electronic device, collating together a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device.
26. A computer readable medium comprising computer program code stored thereon, the computer readable medium and computer program code being configured to, when run on at least one processor, perform at least the following: based on a determined current context of a user of an electronic device, collate a plurality of contextual content data associated with the user's determined current context for provision as a single contextual output to a third party device.
GB1316047.8A 2013-09-10 2013-09-10 Apparatus for collating content items and associated methods Withdrawn GB2517998A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1316047.8A GB2517998A (en) 2013-09-10 2013-09-10 Apparatus for collating content items and associated methods

Publications (2)

Publication Number Publication Date
GB201316047D0 GB201316047D0 (en) 2013-10-23
GB2517998A true GB2517998A (en) 2015-03-11

Family

ID=49486958

Country Status (1)

Country Link
GB (1) GB2517998A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713670A (en) * 2017-02-21 2017-05-24 广东小天才科技有限公司 Alarm clock reminding method applied to mobile terminal and mobile terminal
US9973647B2 (en) 2016-06-17 2018-05-15 Microsoft Technology Licensing, Llc. Suggesting image files for deletion based on image file parameters
WO2018163173A1 (en) * 2017-03-09 2018-09-13 Agt International Gmbh Method and apparatus for sharing materials in accordance with a context

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287061B (en) * 2020-11-17 2024-05-31 深圳市泰同科技有限公司 Method for splicing street view elevation map by using network open data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080171573A1 (en) * 2007-01-11 2008-07-17 Samsung Electronics Co., Ltd. Personalized service method using user history in mobile terminal and system using the method
US20110029538A1 (en) * 2009-07-28 2011-02-03 Geosolutions B.V. System for creation of content with correlated geospatial and virtual locations by mobile device users
US20110047463A1 (en) * 2009-08-24 2011-02-24 Xerox Corporation Kiosk-based automatic update of online social networking sites
US20110083101A1 (en) * 2009-10-06 2011-04-07 Sharon Eyal M Sharing of Location-Based Content Item in Social Networking Service
US20120011450A1 (en) * 2010-07-07 2012-01-12 Paul To Methods and systems for generating and sharing an interactive virtual meeting space
WO2013022156A1 (en) * 2011-08-08 2013-02-14 Samsung Electronics Co., Ltd. Life-logging and memory sharing

Also Published As

Publication number Publication date
GB201316047D0 (en) 2013-10-23


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)