GB2495978A - Smartphone application - Google Patents
- Publication number
- GB2495978A GB1118632.7A GB201118632A
- Authority
- GB
- United Kingdom
- Prior art keywords
- text
- user
- database
- properties
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/95—Retrieval from the web
- G06F16/955—Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/53—Querying
- G06F16/532—Query formulation, e.g. graphical querying
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/5866—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Library & Information Science (AREA)
- Mathematical Physics (AREA)
- Information Transfer Between Computers (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Using a device capable of capturing properties of objects, e.g. by taking a photograph of the object; associating those properties with a unique identifier, e.g. a hashtag, and storing the properties and identifier in a first database; associating user input with the unique identifier and storing the user input and unique identifier in a second database, such as a social media database; and displaying to the user of the device inputs from other users associated with the unique identifiers stored in the second database.
Description
BACKGROUND OF THE INVENTION
Devices capable of capturing spatio-temporal properties of objects are very common, and include hand-held devices such as smart phones equipped with cameras and microphones. Such devices are equipped with powerful processing units capable of executing computer programmes, as well as display units and input units such as keyboards, and are well known to skilled persons.
Social media is an emerging form of media that, according to Wikipedia [http://en.wikipedia.org/wiki/Social_media, fetched on 12 October 2011], "refers to the use of web-based and mobile technologies to turn communication into an interactive dialogue". Andreas Kaplan and Michael Haenlein define social media as "a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of user-generated content".
More generally, a social network can be defined as a database holding interlinked inputs generated by a multitude of users and accessible via a communication network. Notable examples of commercial products are Twitter (TM), Facebook (TM), Foursquare (TM) and LinkedIn (TM). A "social media" stream is a collection of such inputs selected, e.g., according to a particular user, a topic, or metadata associated to such inputs.
Social media is powerful and of growing importance, but information in such networks is growing at such a rate that it is becoming difficult for users to find what is relevant to them and/or their interests.
Solutions to this problem include metadata tagging of social media streams, or of individual pieces of user-generated content.
Perhaps the most notable example of that is "hashtags" (e.g. #londonolympics2012) used by Twitter (TM). Other commercial products use geo-location (for instance based on GPS data), notably FourSquare (TM).
However, significant usability and application gaps still remain in these offerings.
Of particular relevance is the problem of creating social media streams associated to objects or parts of the physical environment, such as a building, or a picture in a magazine, or similar.
For instance, using Twitter (TM) as an example, if a user wanted to start a social media stream related to e.g. a photo in a magazine, s/he would have to create a new "hashtag", but that "hashtag" would be next to impossible for other people to find or guess unless such other people are known to that first user and are monitoring his/her stream of input on Twitter.
DESCRIPTION OF THE INVENTION
The present invention proposes a device and a method to facilitate creating, finding, augmenting and linking social media streams to objects or the physical environment.
The main inventive step is a device and a method to allow each of a multitude of users to use unique metadata associated to properties of multiple instances of an object as a mechanism for creating, adding to, finding and using a social media stream associated to objects, such unique metadata being derived from properties of said object determined with said device by at least one of said multitude of users, by linking the input by at least one of said multitude of users to said unique metadata, and by adding such input and unique metadata together to the social media stream.
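By way of a non-limiting illustration only, the following Python sketch shows one possible relationship between the object properties, the unique metadata and the two databases described above; the names, data structures and the use of exact property equality (rather than the similarity matching discussed later in this document) are illustrative assumptions and not part of the disclosure.

```python
import hashlib
from collections import defaultdict

# First database (the "Hashtag Database"): object-property fingerprints mapped
# to the unique identifier derived from them.
hashtag_db = {}
# Second database (the "Social Network"): unique identifier mapped to the
# stream of user-generated inputs linked to it.
social_db = defaultdict(list)

def hashtag_for(properties: bytes) -> str:
    """Derive a unique, repeatable identifier from captured object properties."""
    return "#obj" + hashlib.sha1(properties).hexdigest()[:8]

def add_to_stream(properties: bytes, user_input: str) -> str:
    """Link a user's input to the object's unique metadata and store both."""
    tag = hashtag_db.setdefault(properties, hashtag_for(properties))
    social_db[tag].append(user_input)
    return tag

# Example: two users commenting on captures of the same object share one stream.
tag = add_to_stream(b"<object properties>", "Love this photo!")
add_to_stream(b"<object properties>", "Me too - where was it taken?")
print(tag, social_db[tag])
```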
In order that the invention may be well understood, an embodiment of the invention will now be described by way of example only and with reference to the accompanying drawings, whereby:
* Twitter (TM) will be used as the "Social Network" example;
* the "Hashtag" as an example of the metadata mechanism (e.g. "Actor1738", Fig. 3, 19);
* a "Social Media Stream" example being a collection of user-generated inputs associated with a Hashtag on Twitter (TM);
* a smart phone (1) equipped with a camera (4), a display (2) and a keyboard (3), able to exchange data via a communications network (11) and capable of executing computer instructions implementing the inventive method described in this document, as an example of the device (the "Device");
* such communications network (11) being "the Internet";
* the physical "Object" being a photo (6) in a magazine (5);
* the object's properties captured being derived from an image of the Object;
* user input being "Text" (e.g. (21)) typed on the device's keyboard (3); and
* a "Hashtag Database" (12) referring to a database accessible by the device over the Internet which stores a list of associations between object properties and Hashtags.
To those skilled in the art it will be manifest how the principles of the invention could be applied to a variety of devices capable of:
* executing computer instructions;
* exchanging data over a communications network;
* accessing or using a variety of databases accessible over a communications network holding interlinked inputs generated by a multitude of users;
* capturing object properties, receiving user input, processing data, and displaying information to a user, where object properties could include single images or a sequence of images of the object as captured by a camera embedded in the device, audio, or other distinguishing properties, such as coded information (e.g. Quick Response (QR) codes, https://en.wikipedia.org/wiki/QR_code, fetched on 24 October 2011) printed on or otherwise attached to the object (a brief illustrative sketch of decoding such coded information follows this list); and
* using any metadata scheme that could be derived from object properties and that can be associated to user input, where the term "object" could in fact refer to one or more portions of discrete objects or of the continuous physical environment.
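As a non-limiting illustration of the last point about coded information, the sketch below decodes a QR code from a captured image so that its payload can serve as the object property from which the unique metadata is derived; the choice of OpenCV and the function name are assumptions made purely for illustration.

```python
from typing import Optional

import cv2  # OpenCV; assumed available, e.g. via `pip install opencv-python`

def properties_from_qr(image_path: str) -> Optional[str]:
    """Return the payload of a QR code printed on or attached to the object, if any."""
    image = cv2.imread(image_path)
    if image is None:
        return None
    payload, _points, _raw = cv2.QRCodeDetector().detectAndDecode(image)
    # An empty string means no QR code was decoded in this image.
    return payload or None
```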
Moreover, while in both Fig. 1 and Fig. 3 the device, the magazine and the photo are denoted with the same numbers (1, 5 and 6, respectively), it will be manifest from the description of the preferred embodiment that:
* the devices may be different instances of the same device, or different devices embodying the invention; and
* the magazine and the photo therein may be two physically different copies of the same magazine.
In the preferred embodiment, Object_A, Object_B and Object_C are three instances of an Object, that is, a photo (6) in a magazine (5): Object_A is that photo in a copy of the magazine held by a user in e.g. London, UK; Object_B is that photo in another copy of the magazine held by another user in e.g. Bristol, UK; and Object_C is that photo in yet another copy of the magazine held by yet another user in e.g. Leeds, UK.
Equally, User_A uses a Device_A (1) embodying the invention, User_B uses another Device_B embodying the invention, and User_C uses another Device_C embodying the invention.
In accordance with a first aspect of the invention there is provided a device Device_A and a method whereby:
a) User_A points (7) Device_A's (1) camera (4) at Object_A (6); and
b) an image (2) of Object_A is captured by Device_A and processed to derive object properties (8); and
c) such object properties are converted by Device_A into a unique Hashtag (9); and
d) such properties together with the Hashtag are sent over the Internet (11) and added to the Hashtag Database (12); and
e) User_A inputs (13) some text Text_A on Device_A; and
f) Device_A associates the Hashtag to said Text_A; and
g) such combined Text_A and Hashtag are sent (14) over the Internet (11) and added to the Social Network (15).
For instance, with the above steps User_A starts a new text comment stream associated with Object (6).
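A minimal, purely illustrative sketch of steps a) to g) follows; the endpoint URLs, field names and the hash-based stand-in for property extraction are assumptions and do not form part of the described method.

```python
import hashlib

import requests  # assumed HTTP client; any networking library would do

# Hypothetical endpoints standing in for the Hashtag Database (12) and the
# Social Network (15); a real deployment would use the relevant service APIs.
HASHTAG_DB_URL = "https://example.com/hashtag-db/register"
SOCIAL_POST_URL = "https://example.com/social/post"

def derive_properties(image_bytes: bytes) -> str:
    # Placeholder for real feature extraction (e.g. SIFT descriptors); here a
    # hash of the captured image stands in for the derived object properties.
    return hashlib.sha256(image_bytes).hexdigest()

def start_stream(image_bytes: bytes, text_a: str) -> str:
    properties = derive_properties(image_bytes)              # steps a)-b)
    hashtag = "#obj" + properties[:8]                        # step c)
    requests.post(HASHTAG_DB_URL,                            # step d)
                  json={"properties": properties, "hashtag": hashtag})
    requests.post(SOCIAL_POST_URL,                           # steps e)-g)
                  json={"text": f"{text_a} {hashtag}"})
    return hashtag
```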
In accordance with a second aspect of the invention, there is provided a device Device_B (1) and a method (Fig. 2) whereby, with the understanding that Figs. 1, 2 and 3 now refer to the method as applied to another user User_B and her context:
a) Another user User_B points (7) Device_B's (1) camera (4) at Object_B; and
b) an image (2) of Object_B is captured (8) by Device_B and processed to derive object properties (8); and
c) Device_B accesses the Hashtag Database (12) over the Internet (11) to establish (10) from said properties whether Object_B is an instance of Object, e.g. by establishing similarity with properties of Object_A which were previously stored by Device_A in the Hashtag Database (12); and
d) if it is so, the device fetches (10) the Hashtag corresponding to the properties of Object_A from the Hashtag Database (12); and
e) User_B inputs (13) some Text_B on Device_B; and
f) Device_B associates the Hashtag metadata to said Text_B; and
g) such combined Text_B and the Hashtag are sent (14) over the Internet (11) and added to the Social Network (15) - see Fig. 3 (20, 21, 22, 23).
For instance, with the above steps User_B would have added a new text comment Text_B to a comment stream associated with Object.
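The following sketch illustrates, under the same assumptions as before (hypothetical endpoints, illustrative field names and response shape), how Device_B could perform the lookup and posting steps of this second aspect.

```python
import requests  # assumed HTTP client

# Hypothetical endpoints; the field names and response shape are assumptions.
HASHTAG_DB_MATCH_URL = "https://example.com/hashtag-db/match"
SOCIAL_POST_URL = "https://example.com/social/post"

def add_to_existing_stream(properties: str, text_b: str) -> bool:
    # Step c): ask the Hashtag Database whether these properties are similar
    # enough to properties already stored (e.g. those of Object_A).
    reply = requests.post(HASHTAG_DB_MATCH_URL,
                          json={"properties": properties}).json()
    if not reply.get("match"):
        return False                                  # not an instance of Object
    hashtag = reply["hashtag"]                        # step d)
    requests.post(SOCIAL_POST_URL,                    # steps e)-g)
                  json={"text": f"{text_b} {hashtag}"})
    return True
```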
In accordance with a third aspect of the invention, illustrated in Fig. 3, there is provided a device Device_C (1) and a method whereby, with the understanding that Figs. 1, 2 and 3 now refer to the method as applied to yet another user User_C and her context:
a) Another user User_C holds Device_C's camera (4) pointing (7) at Object_C; and
b) an image (2) of Object_C is captured and processed by Device_C to derive properties (8); and
c) the device accesses the Hashtag Database (12) over the Internet (11) and establishes (9) from such properties whether Object_C is an instance of Object; and
d) if so, the user is alerted (via e.g. a sound or a vibration of Device_C); and
e) the device fetches (10) the Hashtag (19) corresponding to the properties of Object_A from the Hashtag Database; and
f) the device accesses the Social Network (15) over the Internet (11) to retrieve Text information associated to the Hashtag (19), such as Text_A (21) and Text_B (23); and
g) the device displays (18) Text_A (21) and Text_B (23) on the device's display (2).
In an alternative embodiment such Text_A (21) and Text_B (23) can be displayed (18) on the device's display overlaid with an image of Object_C as captured by the device's camera.
For instance, with the above steps, User_C would wave Device_C over Object_C, be alerted of the availability of comments from other users associated to that object (Text_A and Text_B) and display such comments (20,21,22,23) on the device's display. It should be evident that User_C could add to the comment stream following steps similar to those of the second aspect of this invention.
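A short illustrative sketch of the retrieval and display steps of this third aspect is given below; the query endpoint and the response shape are assumptions for illustration only.

```python
from typing import List

import requests  # assumed HTTP client

# Hypothetical query endpoint for the Social Network (15); the response shape
# ({"results": [{"text": ...}, ...]}) is an assumption for illustration.
SOCIAL_SEARCH_URL = "https://example.com/social/search"

def fetch_stream(hashtag: str) -> List[str]:
    """Return the texts other users have associated with the given hashtag."""
    reply = requests.get(SOCIAL_SEARCH_URL, params={"tag": hashtag}).json()
    return [item["text"] for item in reply.get("results", [])]

# Steps f)-g): retrieve and display Text_A, Text_B, ... for the fetched hashtag.
for text in fetch_stream("#Actor1738"):
    print(text)
```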
Extracting properties from objects, e.g. based on a captured image of such object, is known to those skilled in the art, for instance with the SIFT algorithm (David Lowe, http://www.cs.ubc.ca/~lowe/keypoints/, fetched on 12 October 2011).
Robustly matching properties from different instances of the same object (e.g. from two different images of said object) is more challenging, but is also known to skilled persons, and approaches are embodied in a small number of commercial offerings, such as those from Mobile Acuity (www.mobileacuity.com) and Cortexica (www.cortexica.com). Associating such properties to a unique hashtag is also well known to skilled persons.
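For illustration only, and assuming an OpenCV build with SIFT support, the following sketch extracts SIFT descriptors from two images and applies Lowe's ratio test to decide whether they show instances of the same object; the matching threshold and helper names are illustrative assumptions.

```python
import cv2  # OpenCV with SIFT support, e.g. `pip install opencv-python`

def sift_descriptors(path: str):
    """Extract SIFT descriptors (the 'object properties') from an image file."""
    image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _keypoints, descriptors = cv2.SIFT_create().detectAndCompute(image, None)
    return descriptors

def likely_same_object(path_a: str, path_b: str, min_good_matches: int = 25) -> bool:
    """Decide whether two images are likely to show instances of the same object."""
    desc_a, desc_b = sift_descriptors(path_a), sift_descriptors(path_b)
    matches = cv2.BFMatcher().knnMatch(desc_a, desc_b, k=2)
    # Lowe's ratio test: keep only matches whose best candidate is clearly
    # better than the second-best candidate.
    good = [m for pair in matches if len(pair) == 2
            for m, n in [pair] if m.distance < 0.75 * n.distance]
    return len(good) >= min_good_matches
```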
Whilst preferred embodiments of the present invention have been described above, changes and modifications will be apparent to the skilled person and fall within the scope of the invention as defined by the following claims.
DRAWINGS
In order that the invention may be well understood, an embodiment of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic drawing of a device embodying the invention, and a magazine containing a photo;
FIG. 2 is a schematic diagram of the procedural steps of an embodiment of the invention;
FIG. 3 is a schematic drawing of the device in FIG. 1 displaying social media content associated to a "hashtag" upon pointing the device to the object to which a "hashtag" is associated.
Claims (1)
CLAIMS
The invention claimed is:
1. A device with means for capturing properties of at least one object, receiving user input, displaying information and exchanging data over a communications network, associating such properties to a unique identifier, associating user input to such identifier, such input added via a communications network to a database of inter-linked inputs associated to the unique identifier from at least one other user, means of displaying information derived from said database on the device display, and means of interacting with said database.
2. A device as claimed in claim 1, wherein the captured properties are derived from at least one image of an object.
3. A device as claimed in claim 1, wherein the captured properties are sounds.
4. A device as claimed in claim 1, where the user input is text characters.
5. A device as claimed in claim 1, where the user input is audio.
6. A device as claimed in claim 1, where the user input is at least one image.
7. A device as claimed in claim 1, with means of alerting the user of the device upon associating an identifier with the properties captured from at least one object and when other user input associated to such identifier is available.
8. A device as claimed in claim 1, where the database of user input is derived from a multitude of user inputs in a social network.
9. A device as claimed in claim 1, where the unique identifier is a tag added by users to their input.
10. A device as claimed in claim 1, where the unique identifier is pre-defined by a user.
11. A device as claimed in claim 1, where information derived from said database is displayed together and in concomitance with at least one image of the object.
12. A method for a user to interact with a database of inter-linked inputs provided by other users, comprising:
a. pointing a device towards at least one object;
b. the device capturing properties of at least one such object;
c. associating such properties to a unique identifier;
d. adding via a communications network such unique identifier and said properties to a first database;
e. associating user input to such unique identifier;
f. adding via a communications network such user input and the identifier to a second database storing other input associated to such unique identifier, such input having been generated by at least one other user;
g. retrieving via a communications network from such second database at least one input from at least one other user, such input being associated with said unique identifier; and
h. displaying said input.
13. A method as claimed in claim 12, where the captured properties are derived from at least one image of an object.
14. A method as claimed in claim 12, wherein the captured properties are sounds.
15. A method as claimed in claim 12, where the user input is text characters.
16. A method as claimed in claim 12, where the user input is in the form of audio.
17. A method as claimed in claim 12, where the user input is at least one image.
18. A method as claimed in claim 12, where a user is alerted of the existence of user input associated to an identifier in said second database when such identifier can be uniquely associated to a unique identifier derived from the properties captured from at least one object pointed at by said user.
19. A method as claimed in claim 12, where said second database of user input is derived from a multitude of user inputs in a social network.
20. A method as claimed in claim 12, where the unique identifier is a tag added by users to their input.
21. A method as claimed in claim 12, where the unique identifier is pre-defined by a user.
22. A method as claimed in claim 12, where information derived from said database is displayed together and in concomitance with a display of at least one image of the object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1118632.7A GB2495978A (en) | 2011-10-28 | 2011-10-28 | Smartphone application |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1118632.7A GB2495978A (en) | 2011-10-28 | 2011-10-28 | Smartphone application |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201118632D0 (en) | 2011-12-07 |
GB2495978A (en) | 2013-05-01 |
Family
ID=45373563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1118632.7A Withdrawn GB2495978A (en) | 2011-10-28 | 2011-10-28 | Smartphone application |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2495978A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9244944B2 (en) | 2013-08-23 | 2016-01-26 | Kabushiki Kaisha Toshiba | Method, electronic device, and computer program product |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2397149A (en) * | 2003-01-09 | 2004-07-14 | Eventshots Com Inc | Associating a subject's identity with a photograph |
EP1503325A1 (en) * | 2003-08-01 | 2005-02-02 | The Secretary of State acting through Ordnance Survey | Smart symbols |
US20050162523A1 (en) * | 2004-01-22 | 2005-07-28 | Darrell Trevor J. | Photo-based mobile deixis system and related techniques |
US20060115108A1 (en) * | 2004-06-22 | 2006-06-01 | Rodriguez Tony F | Metadata management and generation using digital watermarks |
EP1710717A1 (en) * | 2004-01-29 | 2006-10-11 | Zeta Bridge Corporation | Information search system, information search method, information search device, information search program, image recognition device, image recognition method, image recognition program, and sales system |
US20080130960A1 (en) * | 2006-12-01 | 2008-06-05 | Google Inc. | Identifying Images Using Face Recognition |
GB2449125A (en) * | 2007-05-11 | 2008-11-12 | Sony Uk Ltd | Metadata with degree of trust indication |
US20090119572A1 (en) * | 2007-11-02 | 2009-05-07 | Marja-Riitta Koivunen | Systems and methods for finding information resources |
WO2009104193A1 (en) * | 2008-02-24 | 2009-08-27 | Xsights Media Ltd. | Provisioning of media objects associated with printed documents |
US20100029326A1 (en) * | 2008-07-30 | 2010-02-04 | Jonathan Bergstrom | Wireless data capture and sharing system, such as image capture and sharing of digital camera images via a wireless cellular network and related tagging of images |
US20100076976A1 (en) * | 2008-09-06 | 2010-03-25 | Zlatko Manolov Sotirov | Method of Automatically Tagging Image Data |
Also Published As
Publication number | Publication date |
---|---|
GB201118632D0 (en) | 2011-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10313287B2 (en) | Methods and systems for displaying messages in an asynchronous order | |
JP6227766B2 (en) | Method, apparatus and terminal device for changing facial expression symbol in chat interface | |
US11081142B2 (en) | Messenger MSQRD—mask indexing | |
US20220200938A1 (en) | Methods and systems for providing virtual collaboration via network | |
US8436911B2 (en) | Tagging camera | |
US20180241705A1 (en) | Methods and systems for generating an ephemeral content message | |
US9485207B2 (en) | Processing of messages using theme and modality criteria | |
US20170351385A1 (en) | Methods and Systems for Distinguishing Messages in a Group Conversation | |
US20110013810A1 (en) | System and method for automatic tagging of a digital image | |
US20120060105A1 (en) | Social network notifications | |
US20130156275A1 (en) | Techniques for grouping images | |
US20120179958A1 (en) | Mapping a Third-Party Web Page to an Object in a Social Networking System | |
US11157134B2 (en) | Interfaces for a messaging inbox | |
WO2014100205A1 (en) | Tagging posts within a media stream | |
WO2007127644A2 (en) | Multimedia sharing in social networks for mobile devices | |
WO2013089674A1 (en) | Real-time mapping and navigation of multiple media types through a metadata-based infrastructure | |
CN106063231A (en) | Information transmission system, information reception method, device and system | |
TW201724038A (en) | Monitoring service system, computer program product, method for service providing by video monitoring and method for service activating by video monitoring | |
US20120246581A1 (en) | Mechanisms to share opinions about products | |
EP2858310A1 (en) | Association of a social network message with a related multimedia flow | |
US9906485B1 (en) | Apparatus and method for coordinating live computer network events | |
CN105450510B (en) | Friend management method, device and server for social networking platform | |
KR101403783B1 (en) | Virtual space providing system for distinct step of application execution | |
TW201508661A (en) | Events integrating method and system | |
GB2495978A (en) | Smartphone application |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |