WO2013072647A1 - Interactive image tagging - Google Patents

Interactive image tagging

Info

Publication number
WO2013072647A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
item
tag
image element
tagging
Prior art date
Application number
PCT/GB2011/001606
Other languages
French (fr)
Inventor
Fraser Aldan ROBINSON
Robert Crawford
Jonathan Whitehead
Original Assignee
Robinson Fraser Aldan
Robert Crawford
Jonathan Whitehead
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robinson Fraser Aldan, Robert Crawford and Jonathan Whitehead
Priority to PCT/GB2011/001606
Publication of WO2013072647A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/5866Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Definitions

  • This invention relates to apparatus and methods for interactively tagging an image element corresponding to a selected item.
  • the invention allows for the interactive tagging of on-line images.
  • Variants of the invention allow for the selection and categorisation of image components, including searching for and matching to equivalent or near-equivalent images or components of images stored in a database.
  • Embodiments of the invention also describe methods of allowing persistent tagging of images, avoiding overlay collision and enhanced matching techniques.
  • the invention has general relevance in the field of image augmentation and machine intelligence.
  • the present invention aims to address at least some of these problems.
  • apparatus for interactively tagging an image element corresponding to a selected item, comprising: means (such as a first computer network interface) for receiving known items data in respect of a plurality of known items; means (such as a second computer network interface) for receiving tagging data relating to the image element; means (such as a processor) for determining from the tagging data and the known items data a known item matching the selected item; means (such as a third computer network interface) for forwarding an interactive data element tag relating to the matching known item for association with the image element; wherein the means for determining comprises means for comparing swatch data.
  • the use of swatch data in determining a matching item from the items described by the item data may allow for improved accuracy and greater speed in item determination - as well as provide a range of potential matching items.
  • the tagging data comprises selected item swatch data determined from the image element.
  • Swatch data is preferably representative of the image element and may reflect typical characteristics of the image element (and hence the corresponding selected item) such as for example colour, shading, texture, and/or material properties.
  • the apparatus further comprises means for determining swatch data from the tagging data.
  • the known item data comprises known item images.
  • the apparatus further comprises means for determining known item swatch data from the known item images.
  • the tagging data comprises classification data classifying the selected item.
  • the means for determining is adapted to determine a known item matching the selected item from a subset of the known items data determined by the classification data. This may speed item matching.
  • the interactive data element tag is adapted to indicate that a known item has been determined for the image element corresponding to the selected item.
  • the interactive data element tag may be adapted to provide access to matching known item data.
  • the interactive data element tag may be adapted to provide means for forwarding the matching known item data.
  • the interactive data element tag is adapted to provide means for interacting with social media.
  • the known items data is received from at least one item supplier and the interactive data element tag is adapted to provide means for requesting a matching known item from the item supplier.
  • the means for determining a known item matching the selected item comprises: means for processing the image element of the selected item to generate a text representation of the selected item; and means for searching the known items data for known items matching the selected item in dependence on the text representation.
  • the apparatus further comprises means for processing the known item data to generate a text representation of the known item from the known item images.
  • the apparatus further comprises means for maintaining the association of an interactive data element tag on a means of display with the corresponding image element, comprising: means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
  • the apparatus further comprises means for consistently displaying an interactive data element tag with the corresponding image element on a means of display, comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected.
  • the apparatus is adapted to receive tagging data relating to the image element from a (first) client device.
  • the apparatus is adapted to forward the interactive data element tag to a (second) client device. This process may occur directly or indirectly.
  • apparatus for determining from data relating to an image element corresponding to a selected item and known items data a known item matching the selected item, the apparatus comprising: means for processing the image element to generate a text representation; and means for searching the known items data for known items matching the selected item in dependence on the text representation.
  • the known item data comprises known item images, and further comprising means for processing the known item data to generate a text representation of the known item from the known item images.
  • the apparatus further comprises means for receiving image element location data; and means for retrieving the image element.
  • the apparatus further comprises means for sorting the matching known items in dependence on the matching closeness.
  • the means for generating a text representation comprises execution of a Color and Edge Directivity Descriptor algorithm.
  • apparatus for maintaining the association of an interactive data element tag on a means of display with a corresponding image element comprising, means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
  • the polled properties comprise at least one of: position, source URL, visibility and presence.
  • the apparatus is adapted to poll properties with a frequency in dependence on their impact on performance. At least one of position and source URL may be polled more frequently than other properties.
  • the frequency of polling is determined according to a property of the means of display (for example, of the browser at the client device). The maximum frequency of polling may be determined according to a property of the means of display.
  • one cycle of polling of properties is completed before a subsequent cycle begins.
  • apparatus for consistently displaying an interactive data element tag with a corresponding image element on a means of display comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected.
  • further features of the invention include client devices:
  • a client device adapted to transmit tagging data relating to an image element to a tagging apparatus (for example, a computer server); preferably, wherein the client device further comprises means for determining swatch data from the image element.
  • a client device adapted to receive an interactive data element tag relating to a known item for association with an image element.
  • the client device further comprises means (such as an applet or browser plug-in) for maintaining the association of the interactive data element tag on a means of display with the corresponding image element, comprising: means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
  • the client device further comprises means for consistently displaying the interactive data element tag with the corresponding image element on a means of display, comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected.
  • a method of interactively tagging an image element corresponding to a selected item comprising: receiving known items data in respect of a plurality of known items; receiving tagging data relating to the image element; determining from the tagging data and the known items data a known item matching the selected item; and forwarding an interactive data element tag relating to the matching known item for association with the image element; wherein the determining comprises comparing swatch data.
  • a method of determining from data relating to an image element corresponding to a selected item and known items data a known item matching the selected item comprising: processing the image element to generate a text representation; and searching the known items data for known items matching the selected item in dependence on the text representation.
  • a method of maintaining the association of an interactive data element tag on a means of display with a corresponding image element comprising: polling properties of the interactive data element tag and the corresponding image element; determining changes in the polled properties; and adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
  • a method of consistently displaying an interactive data element tag with a corresponding image element on a means of display comprising: generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; detecting a user interaction with the interactive data element tag; and hiding the dummy layer duplicate when a user interaction is detected.
  • a method of transmitting tagging data relating to an image element to a tagging apparatus (such as a computer server).
  • the method further comprises determining swatch data from the image element.
  • the method further comprises maintaining the association of the interactive data element tag on a means of display with the corresponding image element, comprising: polling properties of the interactive data element tag and the corresponding image element; determining changes in the polled properties; and adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
  • the method further comprises consistently displaying the interactive data element tag with the corresponding image element on a means of display, comprising: generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; detecting a user interaction with the interactive data element tag; and hiding the dummy layer duplicate when a user interaction is detected.
  • the invention also provides a computer program and a computer program product for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
  • the invention also provides a signal embodying a computer program for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out the methods described herein and/or for embodying any of the apparatus features described herein.
  • the invention extends to methods and/or apparatus substantially as herein described with reference to the accompanying drawings.
  • Figure 1 shows in overview an online image tagging system
  • Figures 2 to 5 show example screenshots from a first embodiment of an online image tagging system
  • Figure 2 shows a screenshot of a web page with an embedded tagged image
  • Figure 3 shows an example of a pop-up menu of an image tag
  • Figure 4 shows an explanatory pop-up menu
  • Figure 5 shows a further explanatory pop-up menu
  • Figures 6 to 14 show example screenshots from a second embodiment of an online image tagging system
  • Figure 6 shows a screenshot of a web page with an embedded tag-enabled image
  • Figure 7 shows tags appearing on a tagged image
  • Figure 8 shows an explanatory pop-up menu
  • Figure 9 shows examples of a plurality of pop-up tag menus associated with different tagged image elements
  • Figure 10 shows an example of a scroll bar in a pop-up tag menu
  • Figure 11 shows the selection of a matching item in a pop-up tag menu
  • Figure 12 shows the selection of the "share” button, showing options for sharing item details via social media
  • Figure 13 shows a further example of multiple tags in an image
  • Figure 14 shows an example of a pop-up tag menu with a "shop” option
  • Figure 15 shows the data flow for the colour matching.
  • suitable computer servers may run common operating systems such as the Windows systems provided by Microsoft Corporation, OS X provided by Apple, various Linux or Unix systems or any other suitable operating system.
  • Suitable buffering, clustering and other load-balancing arrangements are used as appropriate in order to provide sufficient capacity when loads are high.
  • the various servers and client devices are interconnected as described in a computer network using appropriate networking protocols.
  • Corresponding web server software may be used, with web interfaces and other code written in any suitable language including Java, PHP and JavaScript.
  • a Microsoft .Net based stack may be used.
  • Suitable databases include ones based on SQL, for example as provided by Microsoft Corporation or those from Oracle or others.
  • Figure 1 shows an overview of an online networked image tagging system 1.
  • Publication server 10, for example a web server, distributes content such as images 20 which are displayed, for example as a web page 30 in a web browser, on a user device 40 by user 45.
  • Web page 30 is also displayed on client device 50 of tagging user 55, who upon identifying or selecting an item or image element 60 of image 20, is enabled by online image tagging system 1 to select and transmit a swatch or sample data 65 of item 60 and/or provide classification data 68 regarding item 60 to tagging server 70.
  • a swatch is a representative sample of an image element corresponding to a selected item, with typical characteristics such as for example colour, shading, texture, and/or material properties.
  • Tagging server 70 is in communication via network interfaces with vendor servers 80, 90, and receives from them data 88, 98 in the form of a data feed regarding items stored in their respective item databases 85, 95.
  • this data 88, 98 is subsequently stored in tagging server database 75.
  • Upon receiving swatch data 65 and/or classification data 68 regarding item 60 from tagging user 55, tagging server 70 seeks to determine a match in the data 88, 98 from the data feed (or, in some embodiments, from its database 75).
  • tagging server 70 supplies or forwards a tag 100 to a client device 40 which is superimposed on the image element for item 60 in image 20 and viewed by user 45.
  • User 45 therefore sees image 20 with item 60 identified by a tag 100, and by selecting the tag 100 sends a request 110 for further information about item 60 - either as shown directly from the appropriate vendor server 90, or alternatively via tagging server 70.
  • the tagging system performs three core processes:
  • the tagging system offers control of some of these processes to the vendor systems by means of an administration module.
  • the tagging of images 20 - or specific parts of images, relating to identifiable items 60 - provided online by a publisher may be performed by the publisher themselves or outsourced to an agency. Alternatively, image tagging may be undertaken by the public at large - so-called 'crowdsourced' tagging.
  • An authorised user 55 or "tagger" is granted access rights to the tagging platform 1 by tagging server 70 and empowered to annotate images 20, including adding and (optionally) editing or deleting image tags 100.
  • the administrative and computational load of the tagging process is typically borne almost entirely by the tagging server 70; a publisher subscribed to the tagging service need only run a few simple lines of JavaScript on their web server 10 to incorporate a feed from the tagging server 70 which will result in tagged images.
  • the incorporation of tags into images may be achieved by generating one or more superimposed layers, at least one of which may have the purpose of supporting the illusion that the image tag is part of the image.
  • the process by which an authorised tagging user 55 identifies an item or image element 60 of image 20 involves selecting a swatch or sample data 65 of item 60 and/or providing classification data 68 regarding item 60.
  • the swatch selection process is described below; the classification makes use of structured cascade menus.
  • the matching process involves searching for an equivalent or near-equivalent image (or component of an image) to that tagged by the tagging user - effectively identifying an object equivalent (or similar) to that tagged in the image.
  • the matching process is performed by the tagging user, for example by the tagging user providing a hyperlink to an equivalent image on the web or stored in a database such as the tagging server database.
  • the matching may alternatively be performed by the vendor, a supplier of objects such as those tagged in the image.
  • More advanced embodiments use a matching engine (typically a process running on the tagging server) to perform the matching without further intervention by the tagging user or vendor.
  • An initial match is made using the classification text, wherein the matching engine identifies those images stored in the tagging server database which have the same (or as similar as possible) text associated with them.
  • this image text has been provided with the image data to the tagging server from the vendor servers and stored in the tagging server database.
  • a further level of matching is performed using image properties, for example, the colour, shade and/or pattern.
  • the tagging user is provided with a "swatch picker" tool (for example, a resizable selection box or lasso) which can be used to select or highlight a portion of the image.
  • the tagging system takes a snapshot or swatch of the selected portion and uses this in the subsequent 'swatch matching', running a search process to crawl through the tagging server database and compare the swatch with those of images stored in the tagging server database.
  • vendors signed up to the tagging service may provide swatch data in the XML feed with the images of their products (as later displayed in the pop-up menus) and the swatches are pattern-matched with the feed.
  • a matching search is run each time a new tag is created, optionally also whenever a tag is updated or otherwise altered.
  • the tagging system is administered by means of an administration module.
  • authorised publishers have access to a "production line" showing in real time which images are currently tagged or available to be tagged - and in some cases are also able to edit tags.
  • the production line is updated as tags are created and/or updated.
  • the publishers can direct tagging user traffic to only particular types of product item, e.g. sunglasses, and/or also enable selection of products from particular suppliers or vendors.
  • the following embodiments relate to the tagging of fashion items, such as clothing and accessories, in photographs of celebrities; evidently, the principles described can be extended to other subject matter, for example the tagging of goods and even locations.
  • Figures 2 to 5 show examples of a first embodiment of the invention.
  • FIG. 2 shows a screenshot of a web page with an embedded tagged image 200.
  • Tagging service icon 210 present in the lower right corner of the image 200 indicates that the image is able to be tagged; when the user positions the cursor to hover over the image, various tags 220 appear on the image at locations corresponding to tagged image elements.
  • Figure 3 shows an example of a pop-up menu 230 of an image tag, activated when the user positions the cursor to hover over a tag.
  • information relevant to the image (in this case, of a celebrity) and to the image element tagged (in this case, an item of clothing or an accessory), as well as the identity of the tagger, may be presented to the user.
  • Such items of information may include one or more of the following:
  • Figure 4 shows an explanatory pop-up menu 240 or tool-tip, activated when the user positions the cursor to hover over the tagging service icon 210. In this example, the user is prompted to tag elements in the image that the user can identify.
  • Figure 5 shows a further explanatory pop-up menu, in this case prompting the user to log in or sign up to the tagging service.
  • Figures 6 to 14 show examples of a second embodiment of the invention.
  • Figure 6 shows a screenshot of a web page with an embedded tag-enabled image 300.
  • Tagging service icon 310 present in the upper right corner of the image 300 indicates that the image is capable of being tagged.
  • Figure 7 shows tags appearing on a tagged image; when the user positions the cursor to hover over the image, various tags 330, 332, 334 appear on the image at locations corresponding to tagged image elements.
  • Figure 8 shows an explanatory pop-up menu 340 or bubble, activated when the user positions the cursor to hover over the tagging service icon 310.
  • Figure 9 shows examples of a plurality of pop-up tag menus associated with different tagged image elements, triggered when the user positions the cursor to hover over the relevant tags, for example (i) sunglasses 410
  • the pop-up tag menu presents one or more of the following:
  • a plurality of matching (or near-matching) items is displayed.
  • Some embodiments also offer filters within the pop-up tag menu (for example, as drop-down menus, sliders or check boxes), allowing the user to select a desired subset of the presented items, for example by type, vendor, brand and/or price. In some embodiments, the default most-to-least matching display order of the matching items may also be adjusted by the user.
  • Additional controls may be provided to the end user to enable customisation of the results for the current tag and any future tag they see.
  • location-based functionality can be introduced such as “Find more like this... near me” and filtering in order of distance from the user.
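By way of illustration only, ordering results by distance from the user could be realised along the following lines. This is a minimal JavaScript sketch, not part of the disclosure; it assumes each candidate item carries latitude/longitude coordinates, and all names are invented for the example.

```javascript
// Great-circle (haversine) distance between two lat/lon points, in km.
function distanceKm(lat1, lon1, lat2, lon2) {
  var R = 6371; // mean Earth radius in km
  var rad = Math.PI / 180;
  var dLat = (lat2 - lat1) * rad;
  var dLon = (lon2 - lon1) * rad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * rad) * Math.cos(lat2 * rad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Sort matching items nearest-first relative to the user's position.
function sortByDistance(items, user) {
  return items.slice().sort(function (p, q) {
    return distanceKm(user.lat, user.lon, p.lat, p.lon) -
           distanceKm(user.lat, user.lon, q.lat, q.lon);
  });
}
```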
  • a "more like this” button may be provided in place of or in addition to the array of matching items; activating this function triggers the search for and display of further matching items.
  • This function may be in the form of a 'fuzzy' search, locating items which have some similarities to the tagged object - for example, potentially coordinating accessories which match by colour and/or pattern. This introduces a browsing element to the matching and results in a richer experience.
  • Figure 10 shows an example of a scroll bar 510 in a pop-up tag menu.
  • Figure 11 shows the selection of a matching item 520 in a pop-up tag menu, resulting in a display of further information (a "product dashboard") about the matched item, for example a description and a price.
  • Many other items of information may also be provided, for example, availability, user ratings, vendor information etc.
  • Some items of information may be provided at other levels in the menu structure, for example as part of the array of matching items.
  • Figure 12 shows the selection of the "share" button 530, showing options for sharing item details via social media, in this case Facebook or Twitter. This introduces aspects of a "shop and share" experience and further acts to maintain user dwell time on the publisher's web site.
  • FIG. 13 shows a further example of multiple tags 610, 612 in an image, in this case the tagging of multiple persons in an image.
  • Figure 14 shows an example of a pop-up tag menu with a "shop” option 520. This facilitates direct purchase of the identified item from a vendor.
  • a "shopping basket” facility is also provided to allow purchase of multiple items, potentially from different vendors. A complete shopping experience may therefore be provided within a tag.
  • One way of providing the tagging service on a web site is by means of incorporating a suitable browser-side software application or web widget. This has the advantage of requiring only minimal new code to be introduced on a web site that wishes to subscribe to the tagging service. Segments of the code may be run on the hosting website; alternatively, the widget may provide only basic functionality, with the main processing being performed on the tagging server.
  • the widget provides the following services:
  • Web sites are able to include the widget on their site so that they and/or their readers may tag images with information about the content of those images (e.g. who is in the photo, what they are wearing etc).
  • Tags appear only when a user's mouse cursor hovers over an image. When the mouse cursor moves away from the image, the tags disappear.
  • Each item that has been tagged (a shirt, for example) has its own tag.
  • whether or not customers need to be logged in to create and edit tags may be switched on or off in the application.
  • Settings and tagging rules may be set centrally at the tagging server and applied to all sites where the widget is installed; alternatively, bespoke settings may be used such that different publisher sites can have different settings.
  • users are able to set up a 'Pro' account, which allows them to share in the revenue generated by the tags they create. Tags created by a 'Pro' user cannot be edited or deleted.
  • a copy of the images that have been tagged on third party sites is made, the images are displayed on the tagging website and the revenue generated by those images is shared with the third party.
  • the categories are pre-determined (i.e. clothing types). Drop-down menus and boxes are used to avoid spelling mistakes and to help with creating a clean database. However, there is also optionally a free text "keyword” field, and some of the categories like "name” and “location” are usually kept as free text. Optionally, some names may be auto-suggested (e.g. celebrity names).
  • a user can tag any image:
    o if an image has already been tagged, then the user can add information, but not delete, unless the user originally created the tag;
    o via a "My Account" page on the tagging website, a tagging user is able to keep track of all the images they have tagged in the past and click a link to see those images on their source sites.
  • readers may tag images subject to the publisher having editorial rights over what information the readers provide (for example, to screen against the posting of offensive content).
  • Advertisers may also be able to advertise on the widget.
  • the tagging system described enables a tagging user to assist a web site user to "get the look" - to identify and locate suppliers of items matching those seen in web site images.
  • a tag is built according to a two-step process: categorisation - the tagging user is presented with a series of cascaded category choices to guide tag build-up, e.g. jumper, then v-necked
  • swatch picking - the tagging user makes use of a "swatch picker" to highlight or select part of item to obtain a colour and/or pattern
  • the tagging server uses the categorisation and swatch data to pattern match within the XML data feed provided by signed-up retailers and find the most relevant or closest matching items.
  • the tagging server typically operates as a web server with a front-end primarily dealing with user interface aspects and a back-end data processing engine.
  • The front-end - written in JavaScript, although any other suitable language may be used - facilitates the user journey, providing the user interface and governing tag creation and interaction.
  • the tagging system user interface presents an overlay on the tagged image, so that the illusion for the viewing user is that the tags associated with tagged image elements are embedded in the image.
  • the widget must not be obscured by other elements on the page or partially hidden by containing element bounds. Therefore the interactive elements of the widget float above the page content at the highest possible z-index. It is then up to the JavaScript to ensure that the user interface always appears in the correct place.
  • the solution is a function that continually polls the properties of all tagged images.
  • the polling function tests position, source URL, visibility and whether or not the image is on screen. These are split into primary and secondary tests based on their impact on performance - position and source URL are primary tests and are tested more frequently.
  • the frequency of primary tests, secondary tests, and a maximum tests cap are set based on which browser is used; as the process can require intensive processing, older browsers have inferior performance. This allows for a clean degradation in performance for older browsers.
  • the program updates the display to accommodate the new values. Bottlenecking is avoided, i.e. one cycle must complete before another is created.
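The disclosure describes this loop only in prose; the following JavaScript sketch shows one way the primary/secondary polling could be structured. The interval values, the capability test and the bookkeeping fields are all assumptions made for the example.

```javascript
var taggedImages = []; // <img> elements known to carry tags (populated elsewhere)
var tagOverlays = [];  // one floating tag record per image: { el, lastSrc, lastTop, lastLeft }

var oldBrowser = !window.addEventListener;  // crude capability test for degradation
var PRIMARY_MS = oldBrowser ? 500 : 100;    // position & source URL: polled frequently
var SECONDARY_MS = oldBrowser ? 2000 : 500; // visibility & presence: polled less often
var primaryBusy = false;

function pollPrimary() {
  if (primaryBusy) return; // one cycle must complete before another begins
  primaryBusy = true;
  for (var i = 0; i < taggedImages.length; i++) {
    var img = taggedImages[i], tag = tagOverlays[i];
    var rect = img.getBoundingClientRect();
    // if the image moved or its source changed, move the tag overlay with it
    if (tag.lastSrc !== img.src || tag.lastTop !== rect.top || tag.lastLeft !== rect.left) {
      tag.el.style.top = (rect.top + window.pageYOffset) + 'px';
      tag.el.style.left = (rect.left + window.pageXOffset) + 'px';
      tag.lastSrc = img.src;
      tag.lastTop = rect.top;
      tag.lastLeft = rect.left;
    }
  }
  primaryBusy = false;
}

function pollSecondary() {
  for (var i = 0; i < taggedImages.length; i++) {
    var img = taggedImages[i], tag = tagOverlays[i];
    // hide the tag when its image is hidden or has been removed from the document
    var present = document.body.contains(img) && img.offsetParent !== null;
    tag.el.style.display = present ? '' : 'none';
  }
}

setInterval(pollPrimary, PRIMARY_MS);
setInterval(pollSecondary, SECONDARY_MS);
```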
  • Another process updates / hides / exposes the overlay as required, as described below.
  • the back-end - written in Java, although any other suitable language may be used - maintains the tagging platform, provides data to and receives input from the user interaction with the front-end, and also undertakes the various searching and matching functions.
  • a key aspect of the tagging system is the integration of a pattern matching and analysis engine (LIRE) with a search platform (SOLR):
  • the LIRE (Lucene Image REtrieval) library http://www.semanticmetadata.net/lire/ provides a way to retrieve images and photos based on their colour and texture characteristics.
  • SOLR is an open source enterprise search platform (http://lucene.apache.org/solr/), in these examples modified with bespoke additions to optimise it for searching by colour likeness and patterns
  • the product data feed can be represented as a table with the columns: Item code, Type, Style, Colour, Designer, Material, Pattern, Price, Image
  • Each item in the vendor data feed has a link to a corresponding item image.
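Purely for illustration, a single feed item following the columns above might be shaped like this in JavaScript (field names and values are assumed, not taken from the disclosure):

```javascript
var feedItem = {
  itemCode: 'SKU-12345',
  type: 'dress',
  style: 'tulip',
  colour: 'red',
  designer: 'ExampleBrand',
  material: 'silk',
  pattern: 'plain',
  price: 129.00,
  image: 'https://vendor.example.com/images/SKU-12345.jpg' // link to the item image
};
```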
  • the initial SOLR category search is directed at the relevant columns (attributes).
  • LIRE is used as a subsequent filter to find the ones with the right shade of red (and pattern) by swatch.
  • categorisation logic is embedded in a hierarchical database of synonyms, comprising alternate names for items. This accounts for different retailers using different terms for the same type of item. Word order and context also sometimes matter, e.g. a "jeans jacket" is not a type of "jeans" (trousers).
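A minimal sketch of such categorisation logic follows; the table entries and category names are invented for the example. Matching longer phrases before single words is what prevents "jeans jacket" from being classified as "jeans".

```javascript
var synonyms = {
  'jeans jacket': 'jackets',
  'denim jacket': 'jackets',
  'jumper': 'knitwear',
  'sweater': 'knitwear',
  'jeans': 'trousers'
};

function categorise(description) {
  var text = description.toLowerCase();
  // longest phrase first, so word order/context wins over single words
  var phrases = Object.keys(synonyms).sort(function (a, b) {
    return b.length - a.length;
  });
  for (var i = 0; i < phrases.length; i++) {
    if (text.indexOf(phrases[i]) !== -1) return synonyms[phrases[i]];
  }
  return null; // unclassified
}

categorise('ExampleBrand blue jeans jacket'); // -> 'jackets', not 'trousers'
```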
  • the administration widget in the browser has a marquee tool which is used to identify coordinates within an image on the webpage. These coordinates identify the area of the image which represents the best colour likeness to a product we would like to display. The coordinates are passed to the backend when a tag is saved.
  • the image is retrieved from the host website and loaded into the image processing application. The image is then cropped to the dimensions of the polygon that was identified.
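In the patent this crop happens in the server-side image processing application; purely as an illustration of the step itself, the equivalent crop in browser-side JavaScript with a canvas would be:

```javascript
// Crop the marquee-selected region (x, y, width, height) out of an image element.
function cropSwatch(img, x, y, width, height) {
  var canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  // copy only the selected region of the source image into the canvas
  canvas.getContext('2d').drawImage(img, x, y, width, height, 0, 0, width, height);
  return canvas.toDataURL('image/png'); // the swatch, ready for descriptor extraction
}
```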
  • the image processing application uses the LIRE library (http://www.semanticmetadata.net/lire/) to create a Color and Edge Directivity Descriptor (CEDD) representation of the image data. This description is added to the search index along with the tag data.
  • a second search index is created to contain all the product details for the advertisers. Each product image is downloaded from the advertiser and a CEDD representation created for each of the products.
  • the CEDD representation for the tagged image is passed to the product search index where LIRE is again used for ordering the results by the distance of the descriptors. The closer the distance of descriptors, the more similar is the area of the tagged image to the colour of the product.
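The distance function is not spelled out here; CEDD descriptors are conventionally compared with a Tanimoto-style coefficient, which the following simplified JavaScript stand-in illustrates (descriptors are assumed to be plain numeric arrays of equal length; this is not LIRE's actual API):

```javascript
// Tanimoto distance: 0 means identical descriptors, larger means less similar.
function tanimotoDistance(a, b) {
  var dot = 0, na = 0, nb = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (na + nb - dot);
}

// Order products by closeness of their descriptor to the tagged swatch.
function rankByLikeness(swatchDescriptor, products) {
  return products.slice().sort(function (p, q) {
    return tanimotoDistance(swatchDescriptor, p.cedd) -
           tanimotoDistance(swatchDescriptor, q.cedd);
  });
}
```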
  • Figure 15 shows the data flow for the colour matching.
  • Tag view flow:
    1. In the web browser, the tagged image is viewed and the tag displayed. Products are requested for the tag.
    2. The ad lookup service retrieves the tag from the Tag SOLR instance.
    3. The CEDD information is retrieved.
    4. The CEDD information, along with keywords, is passed to the Feed SOLR instance.
    5. The Feed SOLR instance has been customised with a new sort option to sort by CEDD distance.
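A hedged sketch of that flow from the browser's side follows; the endpoint paths, parameter names and the CEDD-distance sort option are assumptions made for the example, not documented interfaces.

```javascript
function requestProductsForTag(tagId) {
  // steps 2-3: retrieve the tag, including its stored CEDD descriptor
  return fetch('/adlookup/tag/' + encodeURIComponent(tagId))
    .then(function (res) { return res.json(); })
    .then(function (tag) {
      // steps 4-5: query the feed index, sorting by the bespoke CEDD distance
      var url = '/feed/select' +
        '?q=' + encodeURIComponent(tag.keywords.join(' ')) +
        '&cedd=' + encodeURIComponent(tag.cedd) +
        '&sort=' + encodeURIComponent('cedddist asc');
      return fetch(url).then(function (res) { return res.json(); });
    });
}
```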
  • the following script block should be present on any page that will have taggable images on it. It can be placed anywhere in the page, but it should be after any meta tags. It can be in the head or the body of the HTML.
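The script block itself is not reproduced in this text. A hypothetical equivalent would be an asynchronous loader along these lines (the widget URL is an assumption):

```html
<script type="text/javascript">
  (function () {
    var s = document.createElement('script');
    s.src = 'https://tagging-server.example.com/widget.js'; // assumed URL
    s.async = true; // load without blocking the rest of the page
    document.getElementsByTagName('head')[0].appendChild(s);
  })();
</script>
```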
  • Unwanted images can be disabled or re-enabled in situ or in the gallery page of the admin area.
  • Filtering (of ad sizes) can be overridden by using a taggstar identifier in the code as described above (i.e. class="taggstar" etc.)
  • JavaScript methods can be used to control or update the widget via the global taggWidget object.
  • viewTag(tsRef, tagId): if (window.taggWidget) taggWidget.viewTag(document.getElementById('myImg'), 3);
  • quitUI(DOMElement, hideInactiveUI): any visible inactive UI will also hide; the UI will be re-activated from a reset state if the user mouses over it. if (window.taggWidget) taggWidget.quitUI(document.getElementById('myImg'), true);
  • quitAll(hideInactiveUI): any visible inactive UI will also hide.
  • a taggstar icon appears in the top corner of the image.
  • an edit mode is provided where you can add or edit tags.
  • a basic variant uses the same searching and swatch matching technology, but publishes without tags on the image, instead resulting in an item or product bar with thumbnail images of matching items (much like a film strip) below the 'tagged' image.
  • Variants of the described tagging system may also be integrated with mobile devices such as smartphones. These may allow for a user to take a picture of an object, tag it and via the tagging system be presented with similar items. The user may also add the item to their digital scrapbook or upload details to social media websites, such as Facebook.
  • the tagging system may also provide the user with an augmented environment, whereby instead of taking a photo of the item, the user would merely point their mobile camera at an item and similar items would appear on the screen in real-time, without need to take a photo.
  • the tagging system may determine a shape, template or "silhouette" of the item and use that to find products from retailers that have a similar characteristic. For example, a "tulip" dress has a particular silhouette, which could be used to match to dresses of similar shape - even if they are not described as such.
  • a similar process may be used to assist the tagging user in the categorisation step.
  • the use of outlines also moves away from relying on the vendors' descriptions in the data feed.
  • Incorporation of shape detection into both the tagging process and the product lookup process, to match on the shape of the products as they appear in the images, can be developed further into a fully-automated image tagging system. This variant of image recognition is more easily performed with objects that stand distinct against a contrasting plain background, rather than, say, a red dress at a red carpet event.
  • More advanced embodiments utilise edge detection to perform background subtraction, thereby determining the shape of objects.
  • a further enhancement uses the recognition of product outlines, design, distinguishing trademarks, barcodes or data matrix codes.
  • a yet more advanced version of the tagging system requires less human involvement in the tagging process.
  • a photo taken of an item by the user is analysed by the system, including consideration of any written caption that appears with the photo, as well as the content of the article in which the photo is embedded. Use of all of these data points helps deliver highly targeted products and in-image advertising.
  • An alternative embodiment no longer requires installation of a client by the publisher, but instead provides a browser-side app to allow a user to tag items on any third party website, even if they are not separately tag-enabled at the hosting site.
  • the in-browser app generates a browser overlay allowing anyone to tag anything they see on the internet. This means tagging images on any web site irrespective of whether or not that web site has installed the tagging program. Those tags would only be visible to those who have the browser application / smartphone app installed.
  • cross-vendor purchase agreements may allow for one-stop purchasing - which may be run as part of the tagging service.
  • the tags may be switched between visible and invisible, allowing filtering by individual and/or group.
  • the 'following' of preferred tagging users in a manner similar to that of Twitter may also be enabled.
  • the tagging system described has applications in subject areas other than fashion clothing - for example, in travel or interior design.
  • Tagging of an object present in an image may be extended to the tagging of a place, such as a known place of interest. Rather than match to a swatch, latitude and longitude geo co-ordinates may be used. In some instances, this information may already be present in the image EXIF information (commonly embedded in mobile phone photos).
  • a user moving a cursor over a tagged image of a location reveals (for example by means of a pop-up window) offers, deals, flights and hotels for that location (if known) or, if it is not known, for similar locations.
  • Elements of the described system can also be applied to, say, the tagging of household items in an online lifestyle magazine, wherein a cursor mouseover reveals a product bar with potential matching items, vendors and offers.
  • Non-commercial uses include cultural tagging, for example tagging paintings and other visual artistic works, their identification from categorisation and/or location data and swatches, and the suggestion of further relevant items. Extension of the tagging system into different subject areas will require an appropriate expansion of the categorising system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Development Economics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Apparatus for interactively tagging an image element corresponding to a selected item, comprising: means for receiving known items data in respect of a plurality of known items; means for receiving tagging data relating to the image element; means for determining from the tagging data and the known items data a known item matching the selected item; means for forwarding an interactive data element tag relating to the matching known item for association with the image element; wherein the means for determining comprises means for comparing swatch data. The use of swatch data may allow for improved accuracy and greater speed in item determination - as well as provide a range of potential matching items.

Description

Interactive image tagging
This invention relates to apparatus and methods for interactively tagging an image element corresponding to a selected item. In particular, the invention allows for the interactive tagging of on-line images. Variants of the invention allow for the selection and categorisation of image components, including searching for and matching to equivalent or near-equivalent images or components of images stored in a database. Embodiments of the invention also describe methods of allowing persistent tagging of images, avoiding overlay collision and enhanced matching techniques. The invention has general relevance in the field of image augmentation and machine intelligence.
Traditionally, online images, such as those of a web page displayed by a web browser, have been flat, one-dimensional objects. Typically, information pertaining to the image is provided alongside the image in the form of hyperlinked clickable text. This has the drawback of resulting in screen clutter. However, despite the development of various interactive types of screen image, the vast majority of images found online remain passive. There is a perceived need for ways that images may be made interactive or otherwise augmented. Furthermore, although word-based search technology is well established, image-based search is still in relative infancy. There is considerable interest in ways which might enable users to search for items based on their visual similarity.
The present invention aims to address at least some of these problems.
According to an aspect of the invention, there is provided apparatus (preferably a computer server) for interactively tagging an image element corresponding to a selected item, comprising: means (such as a first computer network interface) for receiving known items data in respect of a plurality of known items; means (such as a second computer network interface) for receiving tagging data relating to the image element; means (such as a processor) for determining from the tagging data and the known items data a known item matching the selected item; means (such as a third computer network interface) for forwarding an interactive data element tag relating to the matching known item for association with the image element; wherein the means for determining comprises means for comparing swatch data. The use of swatch data in determining a matching item from the items described by the item data may allow for improved accuracy and greater speed in item determination - as well as provide a range of potential matching items. Preferably, the tagging data comprises selected item swatch data determined from the image element. Swatch data is preferably representative of the image element and may reflect typical characteristics of the image element (and hence the corresponding selected item) such as for example colour, shading, texture, and/or material properties.
Preferably, the apparatus further comprises means for determining swatch data from the tagging data.
Preferably, the known item data comprises known item images.
Preferably, the apparatus further comprises means for determining known item swatch data from the known item images.
Preferably, the tagging data comprises classification data classifying the selected item. Preferably, the means for determining is adapted to determine a known item matching the selected item from a subset of the known items data determined by the classification data. This may speed item matching.
Preferably, the interactive data element tag is adapted to indicate that a known item has been determined for the image element corresponding to the selected item.
The interactive data element tag may be adapted to provide access to matching known item data. Preferably, the interactive data element tag is adapted to provide means for forwarding the matching known item data. Preferably, the interactive data element tag is adapted to provide means for interacting with social media.
Preferably, the known items data is received from at least one item supplier and the interactive data element tag is adapted to provide means for requesting a matching known item from the item supplier. Preferably, the means for determining a known item matching the selected item comprises: means for processing the image element of the selected item to generate a text representation of the selected item; and means for searching the known items data for known items matching the selected item in dependence on the text representation.
Preferably, the apparatus further comprises means for processing the known item data to generate a text representation of the known item from the known item images.
Preferably, the apparatus further comprises means for maintaining the association of an interactive data element tag on a means of display with the corresponding image element, comprising: means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
Preferably, the apparatus further comprises means for consistently displaying an interactive data element tag with the corresponding image element on a means of display, comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected. Preferably, the apparatus is adapted to receive tagging data relating to the image element from a (first) client device.
Preferably, the apparatus is adapted to forward the interactive data element tag to a (second) client device. This process may occur directly or indirectly.
According to another aspect of the invention there is provided apparatus for determining from data relating to an image element corresponding to a selected item and known items data a known item matching the selected item, the apparatus comprising: means for processing the image element to generate a text representation; and means for searching the known items data for known items matching the selected item in dependence on the text representation. By generating a text representation from an image, a text-based search may be used to match images, which may provide speed and simplicity.
Preferably, the known item data comprises known item images, and further comprising means for processing the known item data to generate a text representation of the known item from the known item images.
Preferably, the apparatus further comprises means for receiving image element location data; and means for retrieving the image element.
Preferably, the apparatus further comprises means for sorting the matching known items in dependence on the matching closeness.
Preferably, the means for generating a text representation comprises execution of a Color and Edge Directivity Descriptor algorithm.
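The encoding is left open by the disclosure; one minimal JavaScript sketch of turning a descriptor into an indexable text representation, assuming CEDD's 3-bit quantised histogram bins, is:

```javascript
// Emit one token per descriptor bin (e.g. "b12_5") so that an ordinary
// text search index can store and match the descriptor.
function descriptorToText(cedd) {
  return cedd.map(function (value, bin) {
    var level = Math.max(0, Math.min(7, Math.round(value))); // clamp to 3 bits
    return 'b' + bin + '_' + level;
  }).join(' ');
}

descriptorToText([0, 3, 7]); // -> "b0_0 b1_3 b2_7"
```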
According to a further aspect of the invention there is provided apparatus for maintaining the association of an interactive data element tag on a means of display with a corresponding image element, comprising, means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element. Preferably, the polled properties comprise at least one of: position, source URL, visibility and presence.
Preferably, the apparatus is adapted to poll properties with a frequency in dependence on their impact on performance. At least one of position and source URL may be polled more frequently than other properties. Preferably, the frequency of polling is determined according to a property of the means of display (for example, of the browser at the client device). The maximum frequency of polling may be determined according to a property of the means of display. Preferably, one cycle of polling of properties is completed before a subsequent cycle begins.
According to another aspect of the invention, there is provided apparatus for consistently displaying an interactive data element tag with a corresponding image element on a means of display, comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected.
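A minimal JavaScript sketch of this dummy-layer technique, with all element names invented for the example, might read:

```javascript
// The interactive tag floats at a high z-index; a purely visual clone sits in
// the page's own stacking context so the tag appears embedded in the image.
function installDummyLayer(tagEl, imageContainer) {
  var dummy = tagEl.cloneNode(true);   // dummy layer duplicate at the page level
  dummy.style.pointerEvents = 'none';  // the duplicate never receives events
  imageContainer.appendChild(dummy);

  tagEl.addEventListener('mouseover', function () {
    dummy.style.visibility = 'hidden'; // hide the duplicate on user interaction
  });
  tagEl.addEventListener('mouseout', function () {
    dummy.style.visibility = 'visible'; // restore it afterwards (assumed behaviour)
  });
  return dummy;
}
```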
Further features of the invention include client devices:
a client device adapted to transmit tagging data relating to an image element to a tagging apparatus (for example, a computer server); preferably, wherein the client device further comprises means for determining swatch data from the image element.
a client device adapted to receive an interactive data element tag relating to a known item for association with an image element. Preferably, the client device further comprises means (such as an applet or browser plug-in) for maintaining the association of the interactive data element tag on a means of display with the corresponding image element, comprising: means for polling properties of the interactive data element tag and the corresponding image element; means for determining changes in the polled properties; and means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element. More preferably, the client device further comprises means for consistently displaying the interactive data element tag with the corresponding image element on a means of display, comprising: means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; means for detecting a user interaction with the interactive data element tag; and means for hiding the dummy layer duplicate when a user interaction is detected.
According to another aspect of the invention, there is provided a method of interactively tagging an image element corresponding to a selected item, comprising: receiving known items data in respect of a plurality of known items; receiving tagging data relating to the image element; determining from the tagging data and the known items data a known item matching the selected item; and forwarding an interactive data element tag relating to the matching known item for association with the image element; wherein the determining comprises comparing swatch data. According to another aspect of the invention there is provided a method of determining from data relating to an image element corresponding to a selected item and known items data a known item matching the selected item, the method comprising: processing the image element to generate a text representation; and searching the known items data for known items matching the selected item in dependence on the text representation.
According to a further aspect of the invention there is provided a method of maintaining the association of an interactive data element tag on a means of display with a corresponding image element, comprising: polling properties of the interactive data element tag and the corresponding image element; determining changes in the polled properties; and adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element. According to another aspect of the invention, there is provided a method of consistently displaying an interactive data element tag with a corresponding image element on a means of display, comprising: generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; detecting a user interaction with the interactive data element tag; and hiding the dummy layer duplicate when a user interaction is detected.
Further features of the invention include methods executable at client devices:
a method of transmitting tagging data relating to an image element to tagging apparatus (such as a computer server); preferably, wherein the method further comprises determining swatch data from the image element.
a method of receiving an interactive data element tag relating to a known item for association with an image element. Preferably, the method further comprises maintaining the association of the interactive data element tag on a means of display with the corresponding image element, comprising: polling properties of the interactive data element tag and the corresponding image element; determining changes in the polled properties; and adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element. More preferably, the method further comprises consistently displaying the interactive data element tag with the corresponding image element on a means of display, comprising: generating a dummy layer duplicate of the interactive data element tag at the page level of the image element; detecting a user interaction with the interactive data element tag; and hiding the dummy layer duplicate when a user interaction is detected. Further features of the invention are characterised by the dependent claims.
The invention also provides a computer program and a computer program product for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The invention also provides a signal embodying a computer program for carrying out any of the methods described herein, and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out the methods described herein and/or for embodying any of the apparatus features described herein. The invention extends to methods and/or apparatus substantially as herein described with reference to the accompanying drawings.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
The invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows in overview an online image tagging system;
Figures 2 to 5 show example screenshots from a first embodiment of an online image tagging system;
Figure 2 shows a screenshot of a web page with an embedded tagged image;
Figure 3 shows an example of a pop-up menu of an image tag;
Figure 4 shows an explanatory pop-up menu;
Figure 5 shows a further explanatory pop-up menu;
Figures 6 to 14 show example screenshots from a second embodiment of an online image tagging system;
Figure 6 shows a screenshot of a web page with an embedded tag-enabled image;
Figure 7 shows tags appearing on a tagged image;
Figure 8 shows an explanatory pop-up menu;
Figure 9 shows examples of a plurality of pop-up tag menus associated with different tagged image elements;
Figure 10 shows an example of a scroll bar in a pop-up tag menu;
Figure 11 shows the selection of a matching item in a pop-up tag menu;
Figure 12 shows the selection of the "share" button, showing options for sharing item details via social media;
Figure 13 shows a further example of multiple tags in an image;
Figure 14 shows an example of a pop-up tag menu with a "shop" option; and
Figures 15 show the data flow for the colour matching.
System overview
Generally, in the following, suitable computer servers may run common operating systems such as the Windows systems provided by Microsoft Corporation, OS X provided by Apple, various Linux or Unix systems or any other suitable operating system. Suitable buffering, clustering and other load-balancing arrangements are used as appropriate in order to provide sufficient capacity when loads are high. The various servers and client devices are interconnected as described in a computer network using appropriate networking protocols.
Corresponding web server software may be used, with web interfaces and other code written in any suitable language including Java, PHP and JavaScript. A Microsoft .Net based stack may be used.
Suitable databases include ones based on SQL, for example as provided by Microsoft Corporation or those from Oracle or others.

Figure 1 shows an overview of an online networked image tagging system 1. Publication server 10, for example a web server, distributes content such as images 20 which are displayed, for example as a web page 30 in a web browser, on a user device 40 by user 45.
Web page 30 is also displayed on client device 50 of tagging user 55, who upon identifying or selecting an item or image element 60 of image 20, is enabled by online image tagging system 1 to select and transmit a swatch or sample data 65 of item 60 and/or provide classification data 68 regarding item 60 to tagging server 70.
Generally, a swatch is a representative sample of an image element corresponding to a selected item, with typical characteristics such as for example colour, shading, texture, and/or material properties. Tagging server 70 is in communication via network interfaces with vendor servers 80, 90, and receives from them data 88, 98 in the form of a data feed regarding items stored in their respective item databases 85, 95.
In some embodiments, this data 88, 98 is subsequently stored in tagging server database 75.
Upon receiving swatch data 65 and/or classification data 68 regarding item 60 from tagging user 55, tagging server 70 seeks to determine a match in the data 88, 98 from the data feed (or, in some embodiments, from its database 75).
If a successful match for item 60 is found, tagging server 70 supplies or forwards a tag 100 to a client device 40 which is superimposed on the image element for item 60 in image 20 and viewed by user 45. User 45 therefore sees image 20 with item 60 identified by a tag 100, and by selecting the tag 100 sends a request 110 for further information about item 60 - either as shown directly from the appropriate vendor server 90, or alternatively via tagging server 70.

There are several advantages to such a tagging system, which assists in the determination of the tag data according to given parameters, rather than an alternative system which might require tagging users to locate a matching item themselves. For example:
• accuracy - the tagging user may be unable to find a good match
• number of tags - with the overhead of having to locate a link to the tagged item removed, users are likely to tag more images (and items within them)
• choice - rather than a single matching item, a range of matching items may be provided (suitably modified by filters)

The tagging system performs three core processes:
1. tagging
2. classification
3. matching
The tagging system offers control of some of these processes to the vendor systems by means of an administration module.
- tagging
The tagging of images 20 - or specific parts of images, relating to identifiable items 60 - provided online by a publisher may be performed by the publisher themselves or outsourced to an agency. Alternatively, image tagging may be undertaken by the public at large - so-called 'crowdsourced' tagging. An authorised user 55 or "tagger" is granted access rights to the tagging platform 1 by tagging server 70 and empowered to annotate images 20, including adding and (optionally) editing or deleting image tags 100.
The administrative and computational load of the tagging process is typically borne almost entirely by the tagging server 70; a publisher subscribed to the tagging service need only run a few simple lines of JavaScript on their web server 10 to incorporate a feed from the tagging server 70 which will result in tagged images. As will be described below, the incorporation of tags into images may be achieved by generating one or more superimposed layers, at least one of which may have the purpose of supporting the illusion that the image tag is part of the image.
- classification
The process by which an authorised tagging user 55 identifies an item or image element 60 of image 20 involves selecting a swatch or sample data 65 of item 60 and/or providing classification data 68 regarding item 60.
The swatch selection process is described below; the classification makes use of structured cascade menus.
A typical example of classifications, in this case for clothing and accessories, is as follows (note similar items may appear at several different places in the lists):
Main categories:
• Who they are
o Name
o Occupation
Actors; Writers; Musicians/Singers; Chefs; Models;
Fashion Designers; Sports; Business people; Politicians; Architects
• What they are wearing
o Item type (dress shoes, etc)
o Designer / label
o Colour
• Where they wore it
o Location
o Event
o Date
• Where to get it
o Advertisement links
Clothing categories:
• Dresses
o Day / Night; Long sleeve; Off shoulder; One shoulder; Short sleeve; Sleeveless; Strappy; Bandeau
• Tops
o Camis & Vests; Bodies & Corsets; Cropped; Tunics; Shirts;
Blouses; T-shirts; Sweats & Hoodies; Cardigans & Shrugs; Basic tops; Day tops; Going out tops; Shirts & Blouses; Soft jackets and waistcoats; tunics; Jersey
• Knitwear
o Short cardigans; Long cardigans; Jumpers; Sleeveless jumpers;
Knitted dresses; Shrug; Tunic
• Jackets, Coats and Blazers
o Trophy jackets; Biker jackets; Bomber jackets; Leather and PU jackets; Denim jackets; Faux fur jackets
o Cropped; Denim; Fur; Leather; Trenches & Macs; Parkas & ponchos; Short coats; Duffle coats; Parka; Sleeveless; Winter coat; Winter jacket
o Blazers; Waistcoats & Gilets
• Jeans
o Straight; Skinny; Bootcut; Boyfriend; Jeggings; Denim shorts;
Wide leg
• Trousers & Shorts
o Shorts; Trousers - Smart; Trousers - casual ; Skinny; Tapered;
Hotpants; Mid and knee-length; Hareem; Treggings; Joggers; Cropped
• Skirts
o Bodycon; Denim; Tulip; Pencil; Full; Frilled
• Leggings
o Basic; Fashion; Jeggings
• Jumpsuits; Playsuits
• Work wardrobe
• Shoes
o Heels; Platforms; Wedges; Low and mid heels; High heels;
Sandals (heeled)
o Boots; Shoe boots; Lace-up boots; High leg boots; Flat boots; Heeled boots
o Flats; Sandals (flat); Ballet pumps; Canvas pumps; Brogues; Trainers; Flipflops
• Work wardrobe
• Accessories
o Belts; Scarves; Hats & headscarves; Gloves; Hair accessories;
Umbrellas; Party accessories; Shapewear
o Shoe accessories; Hosiery; Socks
• Sunglasses
o Oversized; Aviators; Flat top; Novelty
• Bags & Purses
o Leather; Non-Leather; Clutch; Cross body; Shoulder; Satchel;
Shopper; Luggage; Purses & charms; Hand held; Holdall;
Oversize
• Jewellery
o Necklaces; Bracelets; Earrings; Rings; Brooches
• Tights & Socks
o Ankle socks
o Knee high socks; Tights; Footless tights; Leg warmers
• Lingerie & Nightwear
• Bras; Thongs; Minis; Girls boxers and shorties; High waisted knickers; Bodies; Nightwear; Slippers
• Swimwear
• Separates; Bikini; Swimsuits; Coverups
- matching
Generally, the matching process involves searching for an equivalent or near-equivalent image (or component of an image) to that tagged by the tagging user - effectively identifying an object equivalent (or similar) to that tagged in the image. In some basic embodiments, the matching process is performed by the tagging user, for example by the tagging user providing a hyperlink to an equivalent image on the web or stored in a database such as the tagging server database.
Alternatively, the tagging may be performed by the vendor, a supplier of objects such as those tagged in the image.
More advanced embodiments use a matching engine (typically a process running on the tagging server) to perform the matching without further intervention by the tagging user or vendor. An initial match is made using the classification text, wherein the matching engine identifies those images stored in the tagging server database which have the same (or as similar as possible) text associated with them. Typically, this image text has been provided with the image data to the tagging server from the vendor servers and stored in the tagging server database.
In addition to matching by classification, a further level of matching is performed using image properties, for example, the colour, shade and/or pattern. To facilitate this, the tagging user is provided with a "swatch picker" tool (for example, a resizable selection box or lasso) which can be used to select or highlight a portion of the image. The tagging system takes a snapshot or swatch of the selected portion and uses this in the subsequent 'swatch matching', running a search process to crawl through the tagging server database and compare the swatch with those of images stored in the tagging server database.
Alternatively, vendors signed up to the tagging service may provide swatch data in the XML feed with the images of their products (as later displayed in the pop-up menus) and the swatches are pattern-matched with the feed.
Most vendors use a broadly similar standard data structure for their products; the tagging service therefore uses a data structure designed to match this.
A matching search is run each time a new tag is created, optionally also whenever a tag is updated or otherwise altered.
- administration module
Various aspects of the tagging system are administered by means of an administration module. In this way, authorised publishers have access to a "production line" showing in real time which images are currently tagged or available to be tagged - and in some cases are also able to edit tags. The production line is updated as tags are created and/or updated.
The publishers can direct tagging user traffic to only particular types of product item, e.g. sunglasses, and/or enable selection of products from particular suppliers or vendors.

The following embodiments relate to the tagging of fashion items, such as clothing and accessories, in photographs of celebrities; evidently, the principles described can be extended to other subject matter, for example the tagging of goods and even locations.

First embodiment
Figures 2 to 5 show examples of a first embodiment of the invention.
Figure 2 shows a screenshot of a web page with an embedded tagged image 200. Tagging service icon 210 present in the lower right corner of the image 200 indicates that the image is able to be tagged; when the user positions the cursor to hover over the image, various tags 220 appear on the image at locations corresponding to tagged image elements.
Figure 3 shows an example of a pop-up menu 230 of an image tag, activated when the user positions the cursor to hover over a tag. In this example, information relevant to the image (in this case, of a celebrity), the image element tagged (in this case, an item of clothing or an accessory) and/or the identity of the tagger, may be presented to the user. Such items of information may include one or more of the following:
• Who - Identity of the celebrity tagged
• What - Descriptor of the celebrity
• Gender - Whether the celebrity is male or female
• Dress - Descriptor of the item of clothing tagged
• Rating - Assessment of the item, based on the votes of multiple taggers
• Location - Place at which the image was taken
• Where - Event at which the image was taken
• When - Date at which the image was taken
• Tagger - Identity of the tagger
Further information that may be presented includes identifiers such as logos of vendors of the tagged item and/or links to them.

Figure 4 shows an explanatory pop-up menu 240 or tool-tip, activated when the user positions the cursor to hover over the tagging service icon 210. In this example, the user is prompted to tag elements in the image that the user can identify. Figure 5 shows a further explanatory pop-up menu, in this case prompting the user to log in or sign up to the tagging service.
Second embodiment
Figures 6 to 14 show examples of a second embodiment of the invention. Figure 6 shows a screenshot of a web page with an embedded tag-enabled image 300. Tagging service icon 310 present in the upper right corner of the image 300 indicates that the image is capable of being tagged.
Figure 7 shows tags appearing on a tagged image; when the user positions the cursor to hover over the image, various tags 330, 332, 334 appear on the image at locations corresponding to tagged image elements.
Figure 8 shows an explanatory pop-up menu 340 or bubble, activated when the user positions the cursor to hover over the tagging service icon 310.
Figure 9 shows examples of a plurality of pop-up tag menus associated with different tagged image elements, triggered when the user positions the cursor to hover over the relevant tags:
(i) sunglasses 410
(ii) top 412
(iii) jeans 414
The pop-up tag menu presents one or more of the following:
• an identifier term for the tagged item
• an array or grid of thumbnail images of matching items, identified as being identical or similar to the item tagged; a scroll bar or other navigational tool may be provided to enable the user to view all the items identified
• further functional items, for example as shown, a "share" button to facilitate sharing of item details via social media
Typically, a plurality of matching (or near-matching) items is displayed.
Some embodiments also offer filters within the pop-up tag menu (for example, drop-down menus, sliders, check boxes), allowing the user to select a desired subset of the presented items, for example by type, vendor, brand and/or price. For example, in some embodiments, the default most-to-least-matching display order of the matching items may also be adjusted by the user.
Additional controls may be provided to the end user to enable customisation of the results for the current tag and any future tag they see. When used on mobile devices such as smartphones, location-based functionality can be introduced such as "Find more like this... near me" and filtering in order of distance from the user.
A "more like this" button may be provided in place of or in addition to the array of matching items; activating this function triggers the search for and display of further matching items. This function may be in the form of a 'fuzzy' search, locating items which have some similarities to the tagged object - for example, potentially coordinating accessories which match by colour and/or pattern. This introduces a browsing element to the matching and results in a richer experience.
Figure 10 shows an example of a scroll bar 510 in a pop-up tag menu.
Figure 11 shows the selection of a matching item 520 in a pop-up tag menu, resulting in a display of further information (a "product dashboard") about the matched item, for example a description and a price. Many other items of information may also be provided, for example, availability, user ratings, vendor information etc. Some items of information may be provided at other levels in the menu structure, for example as part of the array of matching items.

Figure 12 shows the selection of the "share" button 530, showing options for sharing item details via social media, in this case FaceBook or Twitter. This introduces aspects of a "shop and share" experience and further acts to maintain user dwell time on the publisher's web site.
In some embodiments, integration with bloggers is encouraged, for example by permitting free text commentary to authorised users and revenue-sharing. Typically, these high-powered users are restricted to commenting on the sanctioning publisher's pages only and/or in respect of particular products, say from a particular vendor.

Figure 13 shows a further example of multiple tags 610, 612 in an image, in this case the tagging of multiple persons in an image.
Figure 14 shows an example of a pop-up tag menu with a "shop" option 520. This facilitates direct purchase of the identified item from a vendor. In some embodiments a "shopping basket" facility is also provided to allow purchase of multiple items, potentially from different vendors. A complete shopping experience may therefore be provided within a tag.
One way of providing the tagging service on a web site is by means of incorporating a suitable browser-side software application or web widget. This has the advantage of requiring only minimal new code to be introduced on a web site that wishes to subscribe to the tagging service. Segments of the code may be run on the hosting website; alternatively, the widget may provide only basic functionality, with the main processing being performed on the tagging server.
Typically, the widget provides the following services:
• Web sites are able to include the widget on their site so that they and/or their readers may tag images with information about the content of those images (e.g. who is in the photo, what they are wearing etc).
• Tags appear only when a user's mouse cursor hovers over an image. When the mouse cursor moves away from the image, the tags disappear.
• When a web site publisher has installed the widget, a logo or symbol appears on widget-approved photos that indicates to the user that a photo has been made "taggable".
• If a photo has been made taggable but has no tags on it, a message appears encouraging customers to tag the image, but only when the cursor hovers over the image itself.
• When an image has tags on it, the tags appear when the mouse cursor hovers over the image.
• Each item that has been tagged (a shirt, for example) has its own tag.
• In order to add or edit a tag, users must first enter "Edit mode". Once in edit mode, users can move the tag around the image.
• The application may be switched between modes in which customers do or do not need to be logged in to create and edit tags.
o If in the mode where users need to have logged in to create/edit tags, an option allows those users either to edit other people's tags, or only edit their own.
o If in the mode where users are allowed to edit other people's tags, then the person who wrote the original tag is optionally sent an email, letting them know that their tag has been altered. This gives that user the ability to go and see how their entry has been changed.
• Settings and tagging rules may be set centrally at the tagging server and applied to all sites where the widget is installed; alternatively, bespoke settings may be used such that different publisher sites can have different settings.
• In some embodiments, users are able to set up a 'Pro' account, which allows them to share in the revenue generated by the tags they create. Tags created by a 'Pro' user cannot be edited or deleted.
• In some embodiments, a copy of the images that have been tagged on third party sites is made, the images are displayed on the tagging website and the revenue generated by those images is shared with the third party.
• When a user is creating a tag, the categories are pre-determined (i.e. clothing types). Drop-down menus and boxes are used to avoid spelling mistakes and to help with creating a clean database. However, there is also optionally a free text "keyword" field, and some of the categories like "name" and "location" are usually kept as free text. Optionally, some names may be auto-suggested (e.g. celebrity names).
• For the user of the tagging service:
o an account with the tagging service is not needed to see the tags and their content, only in order to tag or edit an image.
o Once logged in, a user can tag any image. o If an image has already been tagged, then the user can add information, but not delete, unless the user originally created the tag. o Via a "My Account" page on the tagging website, a tagging user is able to keep track of all the images they have tagged in the past and click a link to see those images on their source sites.
o When the mouse cursor hovers over an image, items in the image that have been tagged are evident so that the user can see what the information is as well as where to get it.
• For the publisher, this approach offers several advantages, including:
o a way to monetise images
o a tool that is easily activated or de-activated (and options easily toggled between) and does not interfere with other applications that may be running on the publisher's web pages.
o control over who can tag images, such that, for example:
employees are able to tag images
readers may tag images subject to the publisher having editorial rights over what information the readers provide (for example, to screen against the posting of offensive content)
the publisher has the right to delete tags on certain images
when a user clicks on a paid link in the widget, the new page appears in a new window and does not lose the visitor to the other site.
• Advertisers may also be able to advertise on the widget
Further discussion of key aspects
Generally, the tagging system described enables a tagging user to assist a web site user to "get the look" - to identify and locate suppliers of items matching those seen in web site images.
When a tagging user selects an image with a cursor, a tag is built according to a two-step process:
• categorisation - the tagging user is presented with a series of cascaded category choices to guide tag build-up, e.g. jumper - v-necked
• swatch picking - the tagging user makes use of a "swatch picker" to highlight or select part of the item to obtain a colour and/or pattern

The tagging server then uses the categorisation and swatch data to pattern match within the XML data feed provided by signed-up retailers and find the most relevant or closest matching items.
The tagging server typically operates as a web server with a front-end primarily dealing with user interface aspects and a back-end data processing engine.
Front-end
The front-end - written in JavaScript, although any other suitable language may be used - facilitates the user journey, providing the user interface and governing tag creation and interaction.
The tagging system user interface presents an overlay on the tagged image; the illusion for the viewing user is that the tags associated with tagged image elements are embedded in the image.
The widget must not be obscured by other elements on the page or partially hidden by containing element bounds. Therefore the interactive elements of the widget float above the page content at the highest possible z-index. It is then up to the JavaScript to ensure that the user interface always appears in the correct place.
The key aspects which enable this are the use of polling and a dummy layer.
- Polling
Many things can change the position or visibility of an image. For example, in a gallery the user moves to the next image, content before an image expands, etc.
The obvious way to deal with this would be to update the host program so that it notifies the system program of such an event, but the need for development work from the host is not desirable.
The solution is a function that continually polls the properties of all tagged images.
This requires continuously monitoring (polling) the assets being tagged, that is the position of the tagged images and the relative position of the overlay layer (Figure 1; 500) above the tagged image (Figure 1; 1000). This is especially the case where the tagged image forms part of a (scrollable) gallery of images.
The polling function tests position, source URL, visibility and whether or not the image is on screen. These are split into primary and secondary tests based on their impact on performance - position and source URL are primary tests and are tested more frequently.
The frequency of primary tests and secondary tests, and a maximum tests cap, are set based on which browser is used; as the process can require intensive processing, older browsers have inferior performance. This allows for a clean degradation in performance on older browsers.
If any of these properties appear to have changed, the program updates the display to accommodate the new values. Bottlenecking is avoided, i.e. one cycle must complete before another is created.
This type of solution is not that widespread because front-end programmers are reluctant to create a constant process. It does however also offer the advantage of being platform agnostic.
Another process updates / hides / exposes the overlay as required, as described below.
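Purely by way of illustration, a minimal polling loop along these lines might be written in JavaScript as follows; the interval values and the helpers repositionOverlay and toggleOverlay are assumptions for this sketch, not the actual widget code:

// Hypothetical overlay helpers - stand-ins for the real widget behaviour
function repositionOverlay(entry) { /* move the tag layer over the image */ }
function toggleOverlay(entry, visible) { /* show or hide the tag layer */ }

var PRIMARY_MS = 250;    // assumed interval for primary tests
var SECONDARY_EVERY = 4; // secondary tests run every 4th cycle (assumed)
var cycle = 0;

function pollTaggedImages(entries) {
  cycle++;
  entries.forEach(function (entry) {
    var img = entry.img;
    var rect = img.getBoundingClientRect();
    // Primary tests: position and source URL, checked every cycle
    if (rect.left !== entry.left || rect.top !== entry.top || img.src !== entry.src) {
      entry.left = rect.left; entry.top = rect.top; entry.src = img.src;
      repositionOverlay(entry);
    }
    // Secondary tests: visibility and on-screen presence, checked less often
    if (cycle % SECONDARY_EVERY === 0) {
      var onScreen = rect.bottom > 0 && rect.top < window.innerHeight;
      var visible = img.offsetParent !== null && onScreen;
      if (visible !== entry.visible) {
        entry.visible = visible;
        toggleOverlay(entry, visible);
      }
    }
  });
  // The next cycle is only scheduled once this one has completed,
  // so one cycle finishes before another is created (no bottlenecking)
  setTimeout(function () { pollTaggedImages(entries); }, PRIMARY_MS);
}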
- Dummy layer
There is an undesirable side effect of superimposing the widget user interface (UI) 'above' the page. Often the widget has constantly visible elements, or displays elements before the user interacts. If an image is obscured or outside a containing element's display area, the widget will seem to display erroneously. Also, for example, a gallery may have all the images stacked one on top of the other - so all of the UI elements would appear together in the same place.
This is solved by use of a dummy UI. This is visually a duplicate of the interactive UI, and is injected into the page at the same level as the image. Because it is next to the image in the flow of the document, it will display according to the same conditions. When the user interacts with the image, the dummy hides and the interactive UI takes its place. An image can only be engaged with if it is visible, so no erroneous display should occur. Any display before or after interaction is in fact the dummy UI.
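As an illustrative sketch only, the dummy-layer swap might be structured as follows; the function and the mouse events used are assumptions, not the widget's actual implementation:

// Inject a non-interactive duplicate of the tag UI next to the image, so it
// is clipped and hidden under exactly the same conditions as the image itself
function installDummyUI(img, interactiveUI) {
  var dummy = interactiveUI.cloneNode(true); // visual duplicate only
  img.parentNode.insertBefore(dummy, img.nextSibling); // same page level as the image
  interactiveUI.style.display = 'none'; // floating UI hidden until interaction

  img.addEventListener('mouseover', function () {
    dummy.style.display = 'none';          // hide the dummy...
    interactiveUI.style.display = 'block'; // ...and swap in the interactive UI
  });
  img.addEventListener('mouseout', function () {
    interactiveUI.style.display = 'none';  // restore the dummy on exit
    dummy.style.display = '';
  });
}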
Back end
The back-end - written in Java, although any other suitable language may be used - maintains the tagging platform, provides data to and receives input from the user interaction with the front-end, and also undertakes the various searching and matching functions.
A key aspect of the tagging system is the integration of a pattern matching and analysis engine (LIRE) with a search platform (SOLR):
• The LIRE (Lucene Image REtrieval) library (http://www.semanticmetadata.net/lire/) provides a way to retrieve images and photos based on their colour and texture characteristics.
• SOLR is an open source enterprise search platform (http://lucene.apache.org/solr/), in these examples modified with bespoke additions to optimise it for searching by colour likeness and patterns
This allows for items to be matched according to a combination of colour/pattern capture and word searching according to the following pseudocode outline:
• Convert swatch tag to swatch text (LIRE)
• For each item in data feed:
o test item category data for match to tag category (SOLR)
o if category match, convert swatch of data feed item image to feed item text (LIRE)
o compare feed item text to swatch text
o if match, display; else discard
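The outline above can be expressed as a runnable JavaScript sketch; swatchToText and descriptorDistance below are placeholder stand-ins for the LIRE conversion and comparison (which in practice run server-side), and the threshold value is an assumption:

var MATCH_THRESHOLD = 20; // assumed maximum descriptor distance for a "match"

// Placeholder stand-ins for the LIRE CEDD conversion and comparison
function swatchToText(swatch) { return String(swatch); }
function descriptorDistance(a, b) { return a === b ? 0 : 100; }

function matchTag(tag, feedItems) {
  var swatchText = swatchToText(tag.swatch); // convert swatch tag to swatch text
  return feedItems
    // test item category data for a match to the tag category
    .filter(function (item) { return item.category === tag.category; })
    // if category match, convert the feed item's image swatch and compare
    .map(function (item) {
      var itemText = swatchToText(item.imageSwatch);
      return { item: item, distance: descriptorDistance(swatchText, itemText) };
    })
    // if match, keep for display; else discard
    .filter(function (m) { return m.distance <= MATCH_THRESHOLD; })
    .sort(function (a, b) { return a.distance - b.distance; }); // closest first
}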
The product data feed can be represented as a table:

Item code  Type   Style   Colour  Designer  Material  Pattern  Price  Image
A          skirt  pencil  red     Westwood  leather   plain    £500   [A]
B          skirt  pencil  red     Karan     wool      plain    £300   [B]
Each item in the vendor data feed has a link to a corresponding item image.
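By way of illustration, a single entry corresponding to the first row of the table might be represented as follows; the field names are hypothetical, not any vendor's actual schema:

var feedItem = {
  itemCode: "A",
  type: "skirt",
  style: "pencil",
  colour: "red",
  designer: "Westwood",
  material: "leather",
  pattern: "plain",
  price: "£500",
  imageUrl: "http://vendor.example.com/images/A.jpg" // link to the item image
};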
The initial SOLR category search is directed at the relevant columns (attributes).
So, for example, for a search for a matching red polo neck:
SOLR finds (all) "polo necks" by word
LIRE is used as a subsequent filter to find the ones with the right shade of red (and pattern) by swatch.
Generally, in respect of category matching, categorisation logic is embedded in a hierarchical database of synonyms, comprising alternate names for items. This is to account for different retailers using different terms for the same type of item. Also, sometimes the word order or context matters, e.g. a "jeans jacket" is not a type of "jeans" (trousers).
If an exact match at a particular category level cannot be found, one level higher (broader) is considered instead. However, if the categorisation term is considered to be essential in defining the item, then the nearest matches with the same essential characteristic are provided. For example, if no suitable "dark blue kaftan robe" is found, the user is first presented with other "kaftan robes" rather than "dark blue robes".
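A minimal sketch of this fallback rule, assuming (purely for this example) that a tag's category is held as an ordered list of terms with essential terms flagged:

// terms: e.g. [{word:"robe"}, {word:"kaftan", essential:true}, {word:"dark blue"}]
// search: a function returning matching items for a list of terms
function fallbackSearch(terms, search) {
  var results = search(terms);
  while (results.length === 0 && terms.length > 1) {
    // Drop the most specific non-essential term to broaden the search
    var dropIndex = -1;
    for (var i = terms.length - 1; i >= 0; i--) {
      if (!terms[i].essential) { dropIndex = i; break; }
    }
    if (dropIndex === -1) break; // only essential terms remain
    terms = terms.slice(0, dropIndex).concat(terms.slice(dropIndex + 1));
    results = search(terms);
  }
  return results;
}
// e.g. with "kaftan" flagged essential, a failed search for a dark blue
// kaftan robe falls back to other kaftan robes before dark blue robes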
- Image sampling to provide a similarity search
The administration widget in the browser has a marquee tool which is used to identify coordinates within an image on the webpage. These coordinates identify the area of the image which represents the best colour likeness to a product we would like to display. The coordinates are passed to the backend when a tag is saved. The image is retrieved from the host website and loaded into the image processing application. The image is then cropped to the dimensions of the polygon that was identified. The image processing application uses the LIRE library (http://www.semanticmetadata.net/lire/) to create a Color and Edge Directivity Descriptor (CEDD) representation of the image data. This description is added to the search index along with the tag data.

A second search index is created to contain all the product details for the advertisers. Each product image is downloaded from the advertiser and a CEDD representation created for each of the products.
When a tag is retrieved, the CEDD representation for the tagged image is passed to the product search index where LIRE is again used for ordering the results by the distance of the descriptors. The closer the distance of descriptors, the more similar is the area of the tagged image to the colour of the product.
Figures 15 show the data flow for the colour matching.
Tagging flow (TF):
1. In the web browser, use the custom JavaScript marquee tool to identify a polygon representing the area of an image which best represents the colour of the item for which you would like to find matching products.
2. Polygon co-ordinates and location of image are sent to the indexing service.
3. Indexing service:
a. downloads the image
b. crops the image to the size of the polygon
c. runs the Color and Edge Directivity Descriptor algorithm to generate a text representation
d. sends CEDD information plus tag information to SOLR
4. SOLR is customised with a new sort option to sort by CEDD distance
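Step 2 of this flow might, purely as a sketch, be implemented as a simple HTTP POST; the endpoint path and payload shape below are assumptions, not the actual service interface:

function sendTagPolygon(imageUrl, polygon, tagData) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/indexing-service/tags"); // hypothetical endpoint
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.send(JSON.stringify({
    image: imageUrl,  // the indexing service downloads and crops this image
    polygon: polygon, // e.g. [{x: 10, y: 20}, {x: 110, y: 20}, ...]
    tag: tagData      // classification data stored alongside the CEDD text
  }));
}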
Tag view flow (TVF):
1. In the web browser, the tagged image is viewed and the tag displayed. Products are requested for the tag.
2. The ad lookup service retrieves the tag from the Tag SOLR instance. The CEDD information is retrieved.
3. The CEDD information along with keywords is passed to the Feed SOLR instance. The Feed SOLR instance has been customised with a new sort option to sort by CEDD distance.
4. The matching results, ordered by the CEDD distance, are returned to the browser.
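An illustrative browser-side sketch of steps 2 to 4 follows; the endpoint names and query parameters are assumptions, not the actual service API:

function fetchProductsForTag(tagId, callback) {
  // The ad lookup service retrieves the tag, including its CEDD descriptor
  fetch("/adlookup/tag/" + tagId)
    .then(function (res) { return res.json(); })
    .then(function (tag) {
      // The Feed SOLR instance is queried with keywords plus the CEDD
      // descriptor, using the custom sort-by-CEDD-distance option
      var url = "/feed/search?keywords=" + encodeURIComponent(tag.keywords) +
                "&cedd=" + encodeURIComponent(tag.cedd) +
                "&sort=cedd_distance";
      return fetch(url).then(function (res) { return res.json(); });
    })
    .then(callback); // matching results, ordered closest colour first
}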
Implementation
The following sections describe some implementation aspects of the tagging system.
- Installation
The following script block should be present on any page that will have taggable images on it. It can be placed anywhere in the page, but it should be after any meta tags. It can be in the head or the body of the html.
<!-- Start taggstar code -->
<script type="text/javascript">
(function () {
    var site = "yourSiteName", // Account id
        delayLoad = true,      // Delays loading script until window.onload
        w = window, d = document;
    function TS() {
        var s = d.createElement('script');
        s.type = 'text/javascript';
        s.async = true;
        s.src = w.location.protocol + '//www.example_server.com/widget/tsui.js.php?site=' + site;
        d.getElementsByTagName('head')[0].appendChild(s);
    }
    if (delayLoad) {
        if (w.attachEvent) w.attachEvent('onload', TS);
        else w.addEventListener('load', TS, false);
    } else TS();
})();
</script>
<!-- End taggstar code -->
Where the example shows site = "yourSiteName", you should place the unique label assigned to your account.
Where the example shows delayLoad = true, this indicates that taggstar will not load until the rest of the page has finished loading.
If you want taggstar to load as soon as possible, change this to delayLoad = false.
1. Identifying individual images as 'taggable':
Keeping an image and its tags together is based on either the image src URL, or using a supplied id if the src might change.
These instructions apply to either an IMG tag, or any element that has a CSS background image.
If the image src URL will always be the same, the identifier "taggstar" can be placed in any of these locations:
A. - In the tag 'class' attribute
<img src=" images/photo . j pg" class="taggstar" />
If there is already a class (or classes), 'taggstar' is placed after a space - i.e. class="yourClass taggstar"
This can also be placed on any element that has a CSS background image. Our reference will be made using the full img src address, and img displayed dimensions. For example: ts_250x300_wwwyoursitecomimagesphotojpg
B. - In a query string on the src URL
<img src=" images/photo . j pg?taggstar " />
If there is already a query string, 'taggstar' is placed after "&" - i.e.
src="photo.jpg?yourparam=1&taggstar"
This can also be placed in the CSS background URL for an element - i.e. background: url("photo.jpg?taggstar")
Our reference will be made using the full img src address, and img displayed dimensions. For example: ts_250x300_wwwyoursitecomimagesphotojpg
C. - Anywhere in the URL of the image src
<img src="www.yoursite.com/images/taggstar/photo.jpg" />
This could be a folder name, or within the image name itself - i.e.
images/photo_taggstar.jpg
This could also be part of a CSS background URL - i.e. background:
url("images/photo_taggstar.jpg")
Our reference will be made using the full img src address, and img displayed dimensions. For example:
ts_250x300_wwwyoursitecomimagestaggstarphotojpg
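The reference format appears to be derived from the displayed dimensions and a stripped-down src, as in the following sketch; the derivation is inferred from the examples in this document and may differ from the actual implementation:

function tsRef(img) {
  var src = img.src.replace(/^https?:\/\//, "");     // strip protocol
  src = src.toLowerCase().replace(/[^a-z0-9]/g, ""); // strip punctuation
  return "ts_" + img.width + "x" + img.height + "_" + src;
}
// A 250x300 image with src "www.yoursite.com/images/taggstar/photo.jpg"
// would give "ts_250x300_wwwyoursitecomimagestaggstarphotojpg"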
If the image src URL might change, an additional unique id can be supplied in either of these ways:
A. - In the tag 'class' attribute
<img src="images/photo . jpg" class="taggstarl23 " /> <!-- substitute 1234 for a unique id -->
If there is already a class (or classes), 'taggstar1234' is placed after a space - i.e. class="yourClass taggstar1234"
This can also be placed on any element that has a CSS background image. Our reference will be made using the id and file name, and img displayed dimensions. For example: ts_250x300_yourSiteld_1234_photojpg
B. - In a query string on the src URL
<img src="images/photo. jpg?taggstar=1234" /> <!-- substitute 1234 for a unique id --> If there is already a query string, 'taggstar' is placed after "&" - i.e. src="photo.jpg?yourparam=1 &taggstar=1234"
This can also be placed in a CSS background URL - i.e. background: url("photo.jpg?taggstar=1234")
Our reference will be made using the id and file name, and img displayed dimensions. For example: ts_250x300_yourSiteld_1234_photojpg
2. Using 'Auto detect photos' setting:
This will tag-enable all images within configured dimensions, except standard ad sizes (for example, 468x60, 728x90, 234x60, 120x90, 120x600, 160x600, 300x250). Filtering can be overridden by using an identifier in the code.
This includes DIV tags with a CSS background image. (Other elements must use the other methods detailed above.)
There is the option not to show the taggstar UI to users if there are no tags yet on an image. Admins can also exclude unwanted images (or re-enable them) by right-clicking the taggstar button on the photo.
Unwanted images can be disabled or re-enabled in situ or in the gallery page of the admin area.
Filtering (of ad sizes) can be overridden by using a taggstar identifier in the code as described above (i.e. class="taggstar" etc.)
Our reference will be made using the full img src address, and img displayed dimensions. For example: ts_250x300_wwwyoursitecomimagesphotojpg
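As a sketch, the auto-detect filtering rule might be expressed as follows; the function name and the way the configured dimensions are passed in are hypothetical:

var AD_SIZES = ["468x60", "728x90", "234x60", "120x90",
                "120x600", "160x600", "300x250"];

function isAutoTaggable(img, minWidth, minHeight) {
  var dims = img.width + "x" + img.height;
  if (AD_SIZES.indexOf(dims) !== -1) return false;         // skip standard ad sizes
  return img.width >= minWidth && img.height >= minHeight; // within configured dimensions
}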
- API
These JavaScript methods can be used to control or update the widget via the global taggWidget object.
Please test for this object before using it, to avoid the possibility of JS errors if the widget is not present for any reason, i.e. if(window.taggWidget){...}

reparse()
Re-examines the page to enable tagging on images that were not present (or not identified as taggable) on load.
This will not reprocess images that are already tag enabled.
if (window.taggWidget) taggWidget.reparse();

viewTag( DOMElement, nthTag )
Displays a tag and its balloon automatically.
Specific taggstar refs can also be passed in instead of these arguments, i.e. viewTag(tsRef, tagId)
if (window.taggWidget) taggWidget.viewTag(document.getElementById('myImg'), 3);

quitUI( DOMElement, hideInactiveUI )
Quits (and optionally completely hides) any taggstar UI for a specified DOM element, i.e. the img, div etc. which is tag-enabled.
If no second parameter is supplied, the UI will just quit (which may still show elements depending on your config settings.)
If the second parameter is 'true', any visible inactive UI will also hide.
The UI will be re-activated from a reset state if the user mouses over it.
if (window.taggWidget) taggWidget.quitUI(document.getElementById('myImg'), true);

quitAll( hideInactiveUI )
Quits all active taggstar UIs on the page, i.e. balloons hide, any editing quits, tags hide (unless set to always visible) etc.
If no parameter is supplied, the UI will just quit (which may still show elements depending on your config settings.)
If the parameter is set to 'true', any visible inactive UI will also hide.
All UIs will be re-activated from a reset state if the user engages with them again.
if (window.taggWidget) taggWidget.quitAll(true);

suspend()
Quits all active taggstar UIs on the page and disables the widget until reactivated by reactivate().
if (window.taggWidget) taggWidget.suspend();

reactivate()
Reactivates the widget after suspend().
Also reveals any inactive UI that should be visible (depending on config), so can be used after quitUI(true) for this purpose.
if (window.taggWidget) taggWidget.reactivate();
- Help
Edit Mode
If you are not yet logged in to taggstar, right-clicking the taggstar icon (top corner) on a photo will prompt you to log in. After that, right-clicking the icon will take you into edit mode, where you can add or edit tags.
If you are an administrator, this will also cause any disabled photos to show a greyed out icon. Right-clicking a greyed out icon will show a prompt to re-enable the image or cancel.
To disable an image in this way, go into edit mode on that image, then click the icon again as if to leave edit mode. At the bottom of the exit edit mode balloon is the option to disable tagging for that image.
At any time you can also right-click an existing tag to enter edit mode. It has the same action as clicking the icon.
Sometimes this will be necessary, as the icon may be hidden by config settings.
Tagger bookmarklet
If the icon is hidden and there are no existing tags, then there will be no way to log in to taggstar on the page. This bookmarklet will add a parameter to the URL that forces the icon to show.

javascript:function updateQsParam(url, paramsOb) {
  var fragStr = "", fragPos = url.indexOf("#");
  if (fragPos > -1) { fragStr = url.substr(fragPos); url = url.substring(0, fragPos); }
  var separator = url.indexOf("?") !== -1 ? "&" : "?";
  for (var k in paramsOb) {
    var re = new RegExp("([?|&])" + k + "=.*?(&|$)", "i"), v = paramsOb[k];
    if (url.match(re)) url = url.replace(re, "$1" + k + "=" + v + "$2");
    else url = url + separator + k + "=" + v;
    separator = "&";
  }
  return url + fragStr;
}
document.location = updateQsParam(window.location + "", {tsIcon: 1});
Bookmark this link, or drag the link onto your browser's bookmarks toolbar.
Click the bookmark when on a page that has taggstar installed. The icon will be displayed on all tag-enabled images.
Debug bookmarklet
This bookmarklet will display information about the tagging service installation on a page.

javascript:w=window;d=document;
if(!w.console){var c=d.createElement("div");d.body.appendChild(c);c.id="jsConsole";
c.style.position="absolute";c.style.right="0";c.style.top="0";c.style.width="240px";
c.style.overflow="scroll";c.style.zIndex="2147483647";c.style.textAlign="left";
c.style.padding="10px";c.style.color="#fff";c.style.backgroundColor="#000";
w.console={log:function(msg){d.getElementById("jsConsole").innerHTML+=msg+"<br/>"}};}
if(w.taggWidget){cf=taggWidget.config;
for(op in cf)if(op!="assetsPath"&&op!="httpRoot"&&op!="httpsRoot"&&op!="getTagsUrl"&&op!="getDataUrl"&&op!="adContentUrl"&&op!="tagRatingUrl"&&op!="jQueryUrl")console.log(op+": "+(cf[op]===""?"-":cf[op]));
console.log(" ");
if(typeof cf=="function"){console.log("Taggstar did not initialise!");if(taggWidget.unsupported)console.log("Browser not supported!");console.log(" ")}
if(taggWidget.proto){tsObs=taggWidget.proto.tsObjects;
for(tsRef in tsObs){o=tsObs[tsRef];console.log(o.imgOb);console.log("tsRef: "+tsRef);console.log("Num tags: "+o.tagsCount());if(o.isDisabled)console.log(">> Img disabled! <<");console.log(" ")}}}
else{console.log("Tagg widget not installed!")}

Bookmark this link, or drag the link onto your browser's bookmarks toolbar.
Click the bookmark when on a page that has the tagging service installed. Installation data will be output to the JavaScript console. If no console is present one will be created on-screen. If Firebug does not display correctly, use the native web console or JS command line.
Caching
It is a good idea to clear the browser cache if things are not displaying as expected. If new tags or changes to tags are still not showing, it is probably because the server cached version is being seen. This may be checked by entering edit mode again. The cache is always circumvented in edit mode - all tags, balloons and ads are made up-to-date and will be present until the page is reloaded.
Troubleshooting checklist
If the user interface, or tags for an image are not behaving as expected, the following should be checked:
• Is the tagging service JS code on the page? Has the site id been inserted? i.e. site = "yourSiteName".
• Is a cached version of the page or the tags being viewed? (see 'Caching' above.)
• If using 'auto-detect', do the image dimensions fall within the criteria set in the admin panel? If the image is a CSS background, 'auto-detect' will only include DIV elements. (All elements are included for other methods.)
• If marking images individually, has the method been inserted correctly? i.e. class="taggstar" or similar.
Also, a check should be made that the class attribute is not included twice, as this may overwrite.
• If existing tags are not showing, is the img src the same as it was previously? (not including http/https.)
Are the display dimensions the same as they were previously? (New dimensions will create a new entry)
• Has the image been 'disabled'? This can be checked by logging in as
administrator and entering edit mode on any image (see above). After that, disabled images will show a greyed out taggstar icon.
• If the image seems to be tag-enabled when it should not be, is the 'auto-detect' setting on in the config?
Or, is the tagging service keyword somewhere in the URL of the image? (This is a legitimate way of tag enabling.)
• If elements of the tagging service UI are not showing, or not hiding, has this been set in the config options?
• Has the browser cache been cleared? Is JavaScript switched on in the browser?

Modifications and alternatives
The following may be provided independently or in appropriate combinations:
Simplified variant
A basic variant uses the same searching and swatch matching technology, but publishes without tags on the image, instead producing an item or product bar with thumbnail images of matching items (much like a film strip) below the 'tagged' image.
Mobile
Variants of the described tagging system may also be integrated with mobile devices such as smartphones. These may allow for a user to take a picture of an object, tag it and via the tagging system be presented with similar items. The user may also add the item to their digital scrapbook or upload details to social media websites, such as FaceBook.
Augmented reality
The tagging system may also provide the user with an augmented environment, whereby instead of taking a photo of the item, the user would merely point their mobile camera at an item and similar items would appear on the screen in real-time, without need to take a photo.
Silhouette matching
When an item is tagged, the tagging system may determine a shape, template or "silhouette" of the item and use that to find products from retailers that have a similar characteristic. For example, a "tulip" dress has a particular silhouette, which could be used to match to dresses of similar shape - even if they are not described as such.
A similar process may be used to assist the tagging user in the categorisation step. By presenting the tagging user with suggested outlines or silhouettes, there is no longer a reliance on the tagger knowing the correct categorisation term when categorising the tagged item. The use of outlines also moves away from relying on the vendor's description in the data feed.

Incorporation of shape detection into both the tagging process and the product lookup process, to match on the shape of the products as they appear in the images, can be developed further into a fully-automated image tagging system. This variant of image recognition is more easily performed with objects that stand distinct against a contrasting plain background, rather than, say, a red dress at a red carpet event.
More advanced embodiments utilise edge detection to perform background subtraction, thereby determining the shape of objects. A further enhancement uses the recognition of product outlines, design, distinguishing trademarks, barcodes or data matrix codes.
There is the need in certain instances to avoid using the full product image so that, for example, if the image contains a model, the colour and other items the model is wearing are not considered in the colour comparison.
Contextual search
A yet more advanced version of the tagging system requires less human involvement in the tagging process. A photo taken of an item by the user is analysed by the system, including consideration of any written caption that appears with the photo, as well as the content of the article in which the photo is embedded. Use of all of these data points helps deliver highly targeted products and in-image advertising.
Social tagging
Many of the above embodiments allowed for the creation of tags only on websites which had installed the tagging system code. An alternative embodiment no longer requires installation of a client by the publisher, but instead provides a browser-side app to allow a user to tag items on any third party website, even if they are not separately tag-enabled at the hosting site.
The in-browser app generates a browser overlay allowing anyone to tag anything they see on the internet. This means tagging images on any web site, irrespective of whether or not that web site has installed the tagging program. Those tags would only be visible to those who have the browser application / smartphone app installed.
The ability to tag items on different images, even on different web pages on different web sites, and their collation, results in a "shopping basket" or "digital scrapbook" comprising all the items a user has tagged across different sites - not least because the system allows for "pin-point bookmarking" of individual items rather than a web page in general. Integration with various social media allows for social bookmarking of individual items or products and the sharing of a "digital scrapbook" amongst users, such as via FaceBook, for example as a "mood board" of matching item themes.
With a multi-vendor shopping basket extending across multiple sites, cross-vendor purchase agreements may allow for one-stop purchasing - which may be run as part of the tagging service.
Some embodiments allow the tags to be switched between visible/invisible, allowing filtering by individual and/or group. The 'following' of preferred tagging users in the manner similar to that of Twitter may also be enabled.
Other subject areas
The tagging system described has applications in subject areas other than clothing and fashion - for example, in travel or interior design. Tagging of an object present in an image may be extended to the tagging of a place, such as a known place of interest. Rather than match to a swatch, latitude and longitude geo co-ordinates may be used. In some instances, this information may already be present in the image EXIF information (commonly embedded in mobile phone photos).
A user moving a cursor over a tagged image of a location reveals (for example by means of a pop-up window) offers, deals, flights and hotels for that location if it is known or, if not known, for similar locations. Elements of the described system can also be applied to, say, the tagging of household items in an online lifestyle magazine, wherein a cursor mouseover reveals a product bar with potential matching items, vendors and offers.
Non-commercial uses include cultural tagging, for example tagging paintings and other visual artistic works, their identification from categorisation and/or location data and swatches, and the suggestion of further relevant items. Extension of the tagging system into different subject areas will require an appropriate expansion of the categorising system.
Fully autonomous tagging
Further development of the technologies described can be seen to lead to the increasing automation of the tagging process, eventually to applications which can tag and identify objects within images entirely without the assistance of a tagging user.

It will be understood that the invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.

Claims
1. Apparatus for interactively tagging an image element corresponding to a selected item, comprising:
means for receiving known items data in respect of a plurality of known items;
means for receiving tagging data relating to the image element;
means for determining from the tagging data and the known items data a known item matching the selected item;
means for forwarding an interactive data element tag relating to the matching known item for association with the image element;
wherein the means for determining comprises means for comparing swatch data.
2. Apparatus according to claim 1, wherein the tagging data comprises selected item swatch data determined from the image element.
3. Apparatus according to claim 1, further comprising means for determining swatch data from the tagging data.
4. Apparatus according to any preceding claim, wherein the known item data comprises known item images.
5. Apparatus according to claim 4, further comprising means for determining known item swatch data from the known item images.
6. Apparatus according to any preceding claim, wherein the tagging data comprises classification data classifying the selected item.
7. Apparatus according to claim 6, wherein the means for determining is adapted to determine a known item matching the selected item from a subset of the known items data determined by the classification data.
8. Apparatus according to any preceding claim, wherein the interactive data element tag is adapted to indicate that a known item has been determined for the image element corresponding to the selected item.
9. Apparatus according to any preceding claim, wherein the interactive data element tag is adapted to provide access to matching known item data.
10. Apparatus according to any preceding claim, wherein the interactive data element tag is adapted to provide means for forwarding the matching known item data.
11. Apparatus according to any preceding claim, wherein the interactive data element tag is adapted to provide means for interacting with social media.
12. Apparatus according to any preceding claim, wherein the known items data is received from at least one item supplier and the interactive data element tag is adapted to provide means for requesting a matching known item from the item supplier.
13. Apparatus according to any preceding claim, wherein the means for determining a known item matching the selected item comprises:
means for processing the image element of the selected item to generate a text representation of the selected item; and
means for searching the known items data for known items matching the selected item in dependence on the text representation.
14. Apparatus according to any of claims 4 to 13, further comprising means for processing the known item data to generate a text representation of the known item from the known item images.
15. Apparatus for determining from data relating to an image element corresponding to a selected item and known items data a known item matching the selected item, the apparatus comprising:
means for processing the image element to generate a text representation; and
means for searching the known items data for known items matching the selected item in dependence on the text representation.
16. Apparatus according to claim 15, wherein the known item data comprises known item images, and further comprising means for processing the known item data to generate a text representation of the known item from the known item images.
17. Apparatus according to any of claims 13 to 16, further comprising:
means for receiving image element location data; and
means for retrieving the image element.
18. Apparatus according to any of claims 13 to 17, further comprising:
means for sorting the matching known items in dependence on the matching closeness.
19. Apparatus according to any of claims 13 to 18, wherein the means for generating a text representation is adapted to execute a Color and Edge Directivity Descriptor (CEDD) algorithm.
20. Apparatus according to any preceding claim, further comprising means for maintaining the association of an interactive data element tag on a means of display with the corresponding image element, comprising:
means for polling properties of the interactive data element tag and the corresponding image element;
means for determining changes in the polled properties; and
means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
21. Apparatus for maintaining the association of an interactive data element tag on a means of display with a corresponding image element, comprising:
means for polling properties of the interactive data element tag and the corresponding image element;
means for determining changes in the polled properties; and
means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
22. Apparatus according to claim 20 or 21, wherein the polled properties comprise at least one of: position, source URL, visibility and presence.
23. Apparatus according to any of claims 20 to 22, wherein the apparatus is adapted to poll properties at a frequency dependent on their impact on performance.
24. Apparatus according to claim 23, wherein at least one of position and source URL is polled more frequently than other properties.
25. Apparatus according to claim 23 or 24, wherein the frequency of polling is determined according to a property of the means of display.
26. Apparatus according to any of claims 23 to 25, wherein the maximum frequency of polling is determined according to a property of the means of display.
27. Apparatus according to any of claims 23 to 25, wherein one cycle of polling of properties is completed before a subsequent cycle begins.
28. Apparatus according to any preceding claim, further comprising means for consistently displaying an interactive data element tag with the corresponding image element on a means of display, comprising:
means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element;
means for detecting a user interaction with the interactive data element tag; and
means for hiding the dummy layer duplicate when a user interaction is detected.
29. Apparatus for consistently displaying an interactive data element tag with a corresponding image element on a means of display, comprising:
means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element;
means for detecting a user interaction with the interactive data element tag; and
means for hiding the dummy layer duplicate when a user interaction is detected.
30. Apparatus according to any preceding claim, adapted to receive tagging data relating to the image element from a client device.
31. A client device adapted to transmit tagging data relating to an image element to the apparatus of any preceding claim.
32. A client device according to claim 31, further comprising means for determining swatch data from the image element.
33. Apparatus according to any of claims 1 to 30, adapted to forward the interactive data element tag to a client device.
34. A client device adapted to receive an interactive data element tag relating to a known item for association with the image element.
35. A client device according to claim 34, further comprising means for maintaining the association of the interactive data element tag on a means of display with the corresponding image element, comprising:
means for polling properties of the interactive data element tag and the corresponding image element;
means for determining changes in the polled properties; and
means for adjusting the properties of the interactive data element tag to maintain its association with the corresponding image element.
36. A client device according to claim 34 or 35, further comprising means for consistently displaying the interactive data element tag with the corresponding image element on a means of display, comprising:
means for generating a dummy layer duplicate of the interactive data element tag at the page level of the image element;
means for detecting a user interaction with the interactive data element tag; and
means for hiding the dummy layer duplicate when a user interaction is detected.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2011/001606 WO2013072647A1 (en) 2011-11-15 2011-11-15 Interactive image tagging

Publications (1)

Publication Number Publication Date
WO2013072647A1 true WO2013072647A1 (en) 2013-05-23

Family

ID=45470590

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2011/001606 WO2013072647A1 (en) 2011-11-15 2011-11-15 Interactive image tagging

Country Status (1)

Country Link
WO (1) WO2013072647A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7856380B1 (en) * 2006-12-29 2010-12-21 Amazon Technologies, Inc. Method, medium, and system for creating a filtered image set of a product
US20090319388A1 (en) * 2008-06-20 2009-12-24 Jian Yuan Image Capture for Purchases
US20100076867A1 (en) * 2008-08-08 2010-03-25 Nikon Corporation Search supporting system, search supporting method and search supporting program
US20110270697A1 (en) * 2010-04-28 2011-11-03 Verizon Patent And Licensing, Inc. Image-based product marketing systems and methods

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
REBECCA BRYANT: "Shop the look with Taggstar", 14 November 2011 (2011-11-14), XP055027888, Retrieved from the Internet <URL:http://style.uk.msn.com/fashion/shop-the-look-with-taggstar> [retrieved on 20120523] *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11227343B2 (en) * 2013-03-14 2022-01-18 Facebook, Inc. Method for selectively advertising items in an image
US20220148098A1 (en) * 2013-03-14 2022-05-12 Meta Platforms, Inc. Method for selectively advertising items in an image
ITUA20163537A1 (en) * 2016-05-18 2017-11-18 Prisma Tech S R L SYSTEM AND METHOD IMPLEMENTED THROUGH IMAGE COLLECTION MANAGEMENT CALCULATOR
CN110019870A (en) * 2017-12-29 2019-07-16 浙江宇视科技有限公司 The image search method and system of image cluster based on memory
EP3736707A1 (en) * 2019-05-07 2020-11-11 Godzaridis, Elenie Techniques for concurrently editing fully connected large-scale multi-dimensional spatial data
US10930087B2 (en) 2019-05-07 2021-02-23 Bentley Systems, Incorporated Techniques for concurrently editing fully connected large-scale multi-dimensional spatial data
US20230316368A1 (en) * 2022-04-04 2023-10-05 Shopify Inc. Methods and systems for colour-based image analysis and search
US11907992B2 (en) * 2022-04-04 2024-02-20 Shopify Inc. Methods and systems for colour-based image analysis and search

Similar Documents

Publication Publication Date Title
US10747826B2 (en) Interactive clothes searching in online stores
US20200159871A1 (en) Computer aided systems and methods for creating custom products
US9678989B2 (en) System and method for use of images with recognition analysis
US7542610B2 (en) System and method for use of images with recognition analysis
US7992082B2 (en) System and technique for editing and classifying documents
US20200342320A1 (en) Non-binary gender filter
WO2020085786A1 (en) Style recommendation method, device and computer program
KR20110081802A (en) System and method for using supplemental content items for search criteria for identifying other content items of interest
US20190012716A1 (en) Information processing device, information processing method, and information processing program
KR102043440B1 (en) Method and system for coordination searching based on coordination of a plurality of objects in image
WO2013072647A1 (en) Interactive image tagging
US11195227B2 (en) Visual search, discovery and attribution method, system, and computer program product
US10474919B2 (en) Method for determining and displaying products on an electronic display device
US11972466B2 (en) Computer storage media, method, and system for exploring and recommending matching products across categories
KR20160117678A (en) Product registration and recommendation method in curation commerce
KR102575382B1 (en) AI-based online clothing retail system
KR102062248B1 (en) Method for advertising releated commercial image by analyzing online news article image
US11645837B1 (en) System for constructing virtual closet and creating coordinated combination, and method therefor
WO2020138941A2 (en) Method for providing fashion item recommendation service to user by using swipe gesture
WO2007041647A2 (en) System and method for use of images with recognition analysis
CN113158061B (en) Data processing method and device
Lepage et al. LRVS-Fashion: Extending Visual Search with Referring Instructions
WO2022011426A1 (en) Content selection platform
Lepage et al. Simplifying Referred Visual Search with Conditional Contrastive Learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11807717

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11807717

Country of ref document: EP

Kind code of ref document: A1