WO2018232270A1 - Transportable marketing content overlaid on digital media - Google Patents

Transportable marketing content overlaid on digital media

Info

Publication number
WO2018232270A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
tag
user
customer
tagged
Prior art date
Application number
PCT/US2018/037797
Other languages
French (fr)
Inventor
Darren R. Elven
Eben O. Johnson
Original Assignee
Qzetta, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qzetta, Inc.
Publication of WO2018232270A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0277Online advertisement

Definitions

  • Fig. 5 is a process flow showing steps taken by a customer for creating tagged digital content.
  • a customer (e.g., a marketer) logs into the system, typically via a customer portal running at a server.
  • the portal prompts the customer for his or her login credentials to log into the system.
  • the system authenticates the customer via an authentication process 501.
  • the system determines whether the customer is authenticated. If the customer is not authenticated, then the process loops back to step 500. After successful authentication, the portal presents a homepage/dashboard corresponding to the customer.
  • the customer navigates through the portal to launch the tagging user interface (UI).
  • the system prompts the customer to enter the URI of the original content that the customer wishes to tag.
  • the customer enters the URI of the original content at step 504.
  • the URI, for example, can be for an online library of images.
  • the server retrieves the original content from the URI.
  • the system includes a server and a client component (e.g., a plug-in or a browser extension).
  • the client component parses the information it receives from the server to generate a framed view of the original content which is presented to the customer via the tagging user interface.
  • the customer selects, via the tagging user interface, an object such as an image within the original content.
  • the original content, for example, can be a webpage for goods/services or a webpage displaying a news article, and an image selected by a customer can be a baseball bat or a golf shirt included in the original content.
  • the client component makes an asynchronous call to Tag API 120 requesting a hash.
  • the Tag API uses the Content Fingerprinting services to generate a hash (i.e., a unique fingerprint) by running the original content through a hashing process.
  • the hashing process is based on a setting by the marketer; the default setting is to execute the hashing process on the client side. If the marketer's setting is not set to the default, then the hashing process is performed by the server.
  • a fingerprint 506 of an original content may be produced using technology such as hashing, URI hashing, perceptual image hashing, or another algorithmic process by which a universally unique identifier can be generated based on processing the original content (a minimal client-side sketch follows below).
  • the original content is not saved by the client component as part of the hashing process.
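The following is a minimal sketch of such client-side fingerprinting. The patent does not fix a hash algorithm or an API; SHA-256 over the raw image bytes stands in here for whichever hashing or perceptual-hashing process an implementation uses, and the function name is illustrative.

```typescript
// Minimal sketch, assuming SHA-256 stands in for the patent's unspecified
// hashing/perceptual-hashing process; fingerprintImage is an illustrative name.
async function fingerprintImage(imageUrl: string): Promise<string> {
  const response = await fetch(imageUrl);     // retrieve the original content
  const bytes = await response.arrayBuffer(); // raw, unmodified image bytes
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  // Hex-encode the digest to serve as a universally unique identifier.
  // The bytes are discarded afterwards; the content itself is not saved.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```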
  • the customer places one or more tags associated with one or more objects included in the original content.
  • the customer creates tag content.
  • the tagging UI requests the customer to enter tag details such as a tag name/title, a tag content type (e.g., a URI, text, media), the content to be included in the tag, x and y coordinates for placement of the tag, etc.
  • the client component allows the customer to move the tag around with respect to the original content for placing the tag. The position (coordinates) at which the customer finalizes placement is noted as the final location of the tag.
  • the generated fingerprint hash key is returned and stored in a memory associated with the client component for the duration of the tagging process. An illustrative sketch of the resulting tag submission follows below.
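The tag details above suggest a request payload along the following lines. This is only a sketch: the patent defines no schema, so every field name and the endpoint path are assumptions.

```typescript
// Hypothetical shape of the tag-creation request assembled by the client;
// the patent does not define a schema, so all names below are assumptions.
interface TagSubmission {
  fingerprint: string;        // hash key held in client memory during tagging
  originalContentUri: string; // URI entered by the customer at step 504
  tagTitle: string;
  contentType: "uri" | "text" | "media";
  payload: string;            // e.g., a target URI or marketing copy
  x: number;                  // final placement coordinates
  y: number;
}

// Submit the tag to Tag API 120 (endpoint path is assumed).
async function submitTag(tag: TagSubmission): Promise<void> {
  await fetch("/api/tags", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(tag),
  });
}
```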
  • the customer requests a preview of the tagged content, which is processed at step 510.
  • the tagged content is sent via Tag API 120 for storage in a content database coupled to the server.
  • the tag is overlaid (at step 513) on the original content.
  • the process determines if the customer intends to create more tags. If yes, then the process loops back to step 508 for creating additional tags. If no, the tagged content is published (at step 517) and the customer is returned (at step 519) to the dashboard of the customer portal.
  • the process submits (via Tag API 120) a publish request at step 518 to the server.
  • Fig. 6 shows example data stored in a database associated with tagged information for multiple customers. This figure depicts how the same original content 201 is tagged by multiple customers: customer A 607a, customer B 607b, ..., customer X 607x. Each customer selects the same original content, which, for example, is an image taken at a fashion show containing goods from multiple customers. Each customer creates tag content and submits the tag content to the Tag API 120.
  • the Tag API 120 receives each customer request, sequentially or simultaneously, creates the first single original content record with the original content fingerprint 606 and metadata 700, and assigns a sequential record ID number, e.g., content A 701, content B 712, ..., content n 713. Subsequent customer tags and tag content are appended with each customer request to store and publish the tag configuration to the original content using the content management services 160, which stores the tags and tag content in content database 150. One plausible shape for this shared record is sketched below.
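One plausible shape for that shared record is sketched below; the layout and field names are assumptions rather than anything specified in the patent.

```typescript
// Assumed layout of a shared content record keyed by fingerprint; the patent
// describes appending each customer's tags to one record but fixes no schema.
interface TagRecord {
  customerId: string;              // e.g., customer A 607a, customer B 607b
  tagId: string;
  x: number;
  y: number;
  content: Record<string, string>; // e.g., { brand: "...", model: "..." }
}

interface OriginalContentRecord {
  recordId: string;
  fingerprint: string;             // original content fingerprint 606
  metadata: Record<string, string>;
  tags: TagRecord[];
}

// Append a new customer's tag to the single record shared by all customers.
function appendTag(record: OriginalContentRecord, tag: TagRecord): void {
  record.tags.push(tag);
}
```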
  • Fig. 7 is a screenshot of a tag placement editor for placing tags on an image.
  • Fig. 7 illustrates how the customer uses the tag placement editor 401 to identify the x coordinate 801 and y coordinate 802 for placing a tag at location 803 over the original content 201.
  • Fig. 8 is a screenshot showing coordinates of a tag with respect to an image. This figure illustrates how the server identifies the tag placement 803 coordinates relative to the original content container.
  • an image is shown within a webpage included in a web browser.
  • the client component requests the server to identify the x and y coordinates of the placement location where the customer clicked on the image within the user interface.
  • the server parses the content container 900 markup (e.g., HTML or XHTML) and, using JavaScript programmatic event information, determines relative and absolute positioning information for the tags.
  • the server computes dimensional information (e.g., width, height, distance from bottom, distance from right) about the content. This information allows the server to determine whether the original content has specific scaling markup, which is beneficial because it reduces the dimensions of the rendered media and hence results in faster rendering of tagged content. The server can also use this information to recognize a need to scale or rotate the tags, convert the tags to a super tag, or hide the tags and enable the tag content bar in its place. A browser-side sketch of the coordinate computation follows below.
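A minimal browser-side sketch of the coordinate computation, under assumed names; the patent describes the server deriving these values from parsed markup and JavaScript event information, which a client could approximate as follows.

```typescript
// Minimal sketch of deriving tag placement coordinates from a click event
// relative to the content container; all names are assumptions.
function tagPlacement(event: MouseEvent, container: HTMLElement) {
  const rect = container.getBoundingClientRect(); // absolute container geometry
  const x = event.clientX - rect.left;            // relative x coordinate 801
  const y = event.clientY - rect.top;             // relative y coordinate 802
  return {
    x,
    y,
    width: rect.width,            // dimensional information the server uses to
    height: rect.height,          // decide on scaling, rotation, or super tags
    fromRight: rect.width - x,
    fromBottom: rect.height - y,
  };
}
```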
  • Fig. 9 is a block diagram showing portable tags and tag content rendering upon original content across multiple distributions.
  • Fig. 9 depicts a scenario where original content (an image of a pair of sunglasses) is distributed across multiple channels.
  • channel A 1011 is associated with a travel blog website and channel B 1012 is a resort marketing website.
  • the original content is also distributed through other channels such as channel X 1013 which corresponds to a mobile device.
  • the server generates the same fingerprint for the browser application 1014 and the mobile application 1015.
  • the server requests tag information from the Tag API 120.
  • the tag information is used for querying the content database for the original content 201 by matching the fingerprint 606 and returning tag 710, tag 720, tag content 712, and tag content 721 to each distribution channel.
  • the server enables portability of original content tags: the same, unaltered image distributed via different channels generates the same fingerprint hash regardless of the URI associated with the content, as sketched below.
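This portability can be illustrated with a short render-time lookup sketch, reusing the illustrative fingerprintImage and TagRecord from the earlier sketches; the query endpoint is an assumption.

```typescript
// Content-addressed lookup: the same unaltered image yields the same
// fingerprint on every channel, so every channel retrieves the same tags.
async function tagsForImage(imageUrl: string): Promise<TagRecord[]> {
  const fingerprint = await fingerprintImage(imageUrl);
  // Query by fingerprint, not by URI: the travel blog, the resort website,
  // and the mobile application all receive identical tags.
  const response = await fetch(`/api/tags?fingerprint=${fingerprint}`);
  return response.json();
}
```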
  • Fig. 10 is a screenshot of a dashboard associated with tagging images.
  • the dashboard is accessible via the world wide web.
  • a marketer intending to tag an image can access the dashboard by entering an address in region 1002.
  • the marketer is tagging multiple items included in the image 1012.
  • the tags in this example are shown to be numbered 1, 2, 3, ..., with tag #1 identifying a product, e.g., a kayak located in image 1012.
  • the name of the product can be entered in region 1004 of the dashboard.
  • the brand, manufacturer or the designer of the product is entered in region 1006.
  • the tag content type is a target URI and a description of the product.
  • the target URI (entered in region 1008 of the dashboard) can provide additional information about the product.
  • a description of the product can be provided in region 1010 of the dashboard.
  • the lower part of the dashboard reveals various tabs, e.g., tags (tab 1013), insights (tab 1014), analysis (tab 1016), metadata (tab 1018), notes (tab 1020), and activity (tab 1022).
  • updating tags centrally via the dashboard can cause changes to propagate to the multiple webpages or websites where the tagged image is used or rendered.
  • Another patentable advantage of the present technology is that the tagged images are rendered in unaltered form, regardless of the way (e.g., via an app or a webpage) or the device (e.g., any type of electronic device) through which the tagged image is accessed.
  • a visitor browsing webpage A can view the tagged image.
  • Embodiments of the present technology do not limit the number of phases or rounds through which a tagged image can be linked and still be viewed as a tagged image.
  • yet another advantage of the present technology is that the tags in the tagged image remain intact or consistent across multiple copies or impressions of the image.
  • the disclosed system can generate metadata derived from user interactions with the tagged content (e.g., data analytics and user statistics in connection with user preferences for goods/services).
  • the analytics collected can be provided to the same customer or other customers for assistance in creating marketing content.
  • user behavior and brand behavior can be predicted using statistical learning methods.
  • the disclosed system might create and offer filters to customers/marketers that identify users who have demonstrated certain behaviors, so that those users receive special or specialized tags, which is likely to encourage select users to pay particular attention to a brand. As an example, a person who has engaged frequently with tagged images associated with a given brand of shoes might be invited to look at and vote on next year's product line.
  • Fig. 11 is an example screenshot of a user interface showing a tagged image.
  • the tags are shown by the circled letter "Q" in Fig. 11.
  • the user interface can reveal a search functionality (e.g., shown in region 1102 in Fig. 11) to search for other images with the same product, obtain additional information about a product or a brand, engage in live chat messaging with brand representatives, and/or buy a product.
  • a search box can be displayed with options for the user to view the same product on different models, in different backgrounds, lighting conditions, formal/informal settings, different colors/shades of the same product and the like.
  • the search box can display options for the same type/category of product from different brands, such as LLBEAN, PATAGONIA, and EDDIE BAUER. Accordingly, a user can select the options he or she desires to run a search, as sketched below.
  • the results of the search may include images.
  • thumbnails of the images are displayed. A user can click on an image thumbnail to view an enlarged image.
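A hedged sketch of such a search request follows; the patent describes the search options but no concrete API, so the query fields and endpoint are assumptions.

```typescript
// Assumed query shape for the tag search; the patent lists the options but
// defines no API, so fields and endpoint are illustrative.
interface TagSearchQuery {
  product: string;   // e.g., "sweater"
  brands?: string[]; // e.g., ["LLBEAN", "PATAGONIA", "EDDIE BAUER"]
  color?: string;
  setting?: "formal" | "informal";
}

// Returns, e.g., thumbnail URLs of matching tagged images.
async function searchTaggedImages(q: TagSearchQuery): Promise<string[]> {
  const params = new URLSearchParams({ product: q.product });
  if (q.brands) params.set("brands", q.brands.join(","));
  if (q.color) params.set("color", q.color);
  if (q.setting) params.set("setting", q.setting);
  const response = await fetch(`/api/tag-search?${params}`);
  return response.json();
}
```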
  • Fig. 12 is another example screenshot of a user interface showing a tagged image.
  • the disclosed system can display (via the user interface) a scrolling wheel (e.g., element 1202 in Fig. 12) or a sliding bar to browse through the results of the search associated with a tagged image.
  • the scrolling wheel or the sliding bar can be rotated/moved to allow the person to view different colors/shades of the same product, e.g., from a certain brand/manufacturer.
  • via the user interface, a user can try out different colors of a sweater or different shades of lipstick on a model's torso or lips in a tagged image.
  • the system allows the different shades/colors to be overlaid or superimposed on the original image (or, a portion thereof) without altering or modifying the original image. For example, when the user closes the tag, the user can see the original content without modification.
  • Embodiments of the present technology allow object recognition (e.g., using artificial intelligence or machine learning methods) by scanning an image to identify different objects (such as lips, eyes, face, eyelashes, cheeks, ears, etc.). Upon identification of an object, the disclosed system can automatically overlay a tag on that object, as sketched below.
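A sketch of this auto-tagging step is shown below, with detectObjects as a hypothetical stand-in for whatever AI/ML detector an implementation would use (the patent names none). TagRecord is the illustrative type sketched earlier.

```typescript
// detectObjects is a hypothetical stand-in for an AI/ML object detector;
// the patent names no model or library.
interface DetectedObject {
  label: string;                                  // e.g., "lips", "eyes"
  box: { x: number; y: number; w: number; h: number };
}

declare function detectObjects(imageUrl: string): Promise<DetectedObject[]>;

// Automatically overlay one tag at the center of each recognized object.
async function autoTag(imageUrl: string): Promise<TagRecord[]> {
  const objects = await detectObjects(imageUrl);
  return objects.map((obj, i) => ({
    customerId: "system",
    tagId: `auto-${i + 1}`,
    x: obj.box.x + obj.box.w / 2,
    y: obj.box.y + obj.box.h / 2,
    content: { label: obj.label },
  }));
}
```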
  • rotating the scrolling wheel or moving the sliding bar can display a different model wearing, or otherwise associated with the product in the image.
  • the scrolling wheel is shown on the user interface in the form of a partial wheel or an arc, which can suggest to the person that there are options to view additional images.
  • a person can scroll through several images to find a model in an image who resembles the person (e.g., in looks or skin complexion); upon finding such an image, if the person does not like the shade of the product (e.g., a sweater) associated with the model, the disclosed technology allows the person to freeze the image of the model and then try out, cascade, or otherwise provision different colors/shades of the product (e.g., the sweater) on the model.
  • the present technology provides a better user experience when a user is interacting with tagged images, delivering content that may be of interest to the user and that can lead to a sale of the marketer's products/services.
  • an image can be owned by a brand or it can be owned by other entities such as a photographer.
  • the system allows brands to register their images by associating digital rights (e.g., copyrights) with their images. This helps protect the rights to images and prevents illegal distribution. For example, a photographer who owns an image can claim the digital rights to the image.
  • Fig. 13 is a flowchart showing steps taken by a computer for generating a report in connection with managing digital rights. The process starts at step 1302 and receives (at step 1304) log-in credentials of a person (e.g., an owner of an image) for managing digital rights. If the log-in credentials are authenticated (at step 1306), the process displays (at step 1308) a user interface.
  • One or more images uploaded by the person are received (at step 1310) via the user interface.
  • the process generates a fingerprint of the image and matches (at step 1314) the fingerprint with fingerprints of images stored in a database. If there is no match, the process stores (at step 1316) the fingerprint in a database. If there is a match, the process receives (at step 1318) digital rights content and appends (at step 1320) it to the fingerprint. The appended fingerprint is saved in a digital rights database at step 1324, as sketched below.
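A minimal sketch of this registration step, with the digital rights database stubbed as a map and all record fields assumed; fingerprintImage is the illustrative hashing sketch from earlier.

```typescript
// Sketch of the digital-rights registration step of Fig. 13, with the
// digital rights database stubbed as a map and all field names assumed.
interface RightsRecord {
  fingerprint: string;
  owner: string;     // name and contact information of the image owner
  licensing: string; // licensing instructions
}

async function registerRights(
  imageUrl: string,
  rights: Omit<RightsRecord, "fingerprint">,
  rightsDb: Map<string, RightsRecord>,
): Promise<RightsRecord> {
  const fingerprint = await fingerprintImage(imageUrl); // generate fingerprint
  // Append the digital rights content to the fingerprint (steps 1318-1320)
  // and save it in the digital rights database (step 1324).
  const record: RightsRecord = { fingerprint, ...rights };
  rightsDb.set(fingerprint, record);
  return record;
}
```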
  • the process checks whether the fingerprint of an image rendered on a user's device matches a fingerprint stored in the digital rights database.
  • the digital rights database and the contents database can be the same database or different databases. If there is a match, the process parses (at step 1332) image metadata.
  • the image metadata can be located in the contents database.
  • the metadata along with additional information can also be added to the digital rights database.
  • the additional information can include fields associated with the image fingerprint, such as a name and contact information of the image owner, licensing instructions, and the like.
  • at step 1334, further data is added to capture where (e.g., the URL or IP address) and when (e.g., a timestamp) an image was rendered.
  • Such data can be useful for a customer/brand and also the image owner, e.g., with the intent of licensing the image.
  • the owner may initiate a request to receive a report, in some format, of all or filtered instances in which their image or images were rendered on users' devices. For example, an owner may want a report detailing all instances in which their image of the tennis player Federer at a given tournament was rendered on a URL belonging to the Canadian branch of ESPN.
  • the system queries the digital rights database to retrieve all records meeting the report filtering criteria.
  • the report might show the count of times the image was rendered in URLs belonging to the Canadian branch of ESPN, the count of instances with all ESPN URLs, further displaying the count in monthly buckets, and then separating by time of day or geographical areas within Canada.
  • the report can include details of instances when the tagged image was presented to the user. Examples of details can be a time stamp each time the tagged image was rendered, brand(s) associated with the tagged image, and a number of times the tag was clicked by a customer.
  • the process organizes and formats the results (e.g., from step 1338) into a useful format suitable for human or machine reading, as sketched below. The process ends at step 1342.
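The aggregation behind such a report might look like the following sketch, which filters render events by fingerprint and URL and counts them in monthly buckets; the event shape and filter semantics are assumptions.

```typescript
// Assumed shape of a render event and a report query that counts renders of
// one image, filtered by URL, in monthly buckets.
interface RenderEvent {
  fingerprint: string;
  url: string;     // where the image was rendered
  timestamp: Date; // when the image was rendered
}

function monthlyRenderCounts(
  events: RenderEvent[],
  fingerprint: string,
  urlFilter: string, // e.g., URLs of the Canadian branch of ESPN
): Map<string, number> {
  const buckets = new Map<string, number>();
  for (const e of events) {
    if (e.fingerprint !== fingerprint || !e.url.includes(urlFilter)) continue;
    const month = `${e.timestamp.getFullYear()}-${e.timestamp.getMonth() + 1}`;
    buckets.set(month, (buckets.get(month) ?? 0) + 1);
  }
  return buckets; // e.g., "2018-6" -> 42
}
```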
  • machine- or computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, and optical disks (e.g., Compact Disc Read-Only Memory (CD-ROMs), Digital Versatile Discs (DVDs), etc.), among others, and transmission-type media such as digital and analog communication links.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Strategic Management (AREA)
  • Finance (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Methods and systems for a customer to create marketing content that can be rendered (or overlaid) over digital content without modifying the content. For example, the digital content can be in the form of webpages, images, videos, audio/visual content, or any other digital media, and creating the tagged digital content involves tagging the digital content with additional data. In some embodiments, tagging digital content includes adding or embedding data that describes one or more items included in a webpage or a video, e.g., a good/service or an object included in the original content. A tagged item can be presented or rendered to a user on the user's electronic device. Usually, a tagged item is a good/service or an object included in the original content, integrated with additional data.

Description

TRANSPORTABLE MARKETING CONTENT OVERLAID ON
DIGITAL MEDIA
TECHNICAL FIELD
[0001] The disclosed technology is in the technical field of digital information. More particularly, the disclosed technology is in the technical field of embedding transportable marketing content on digital media.
BACKGROUND
[0002] Conventionally, digital marketing content is created based on identifying patterns in user browsing behaviors on webpages. Typically, the marketing content is focused exclusively on a single web page viewed by an internet user, which makes it difficult to make the digital content portable across multiple content channels. In some scenarios, server-side scripts are injected by a server to distribute marketing content in more than one web page. However, when multiple advertising or marketing scripts are injected at the same time, the client web browser is subjected to extensive processing. As a result, the user experience suffers, which may lead a user to leave a web page. Further, the marketing content delivered to a user browsing a webpage may not be contextually related to the content of the webpage and is not necessarily relevant to the user. Thus, there is a need for a system that can provide contextual marketing content with quality and accuracy and that is portable across multiple channels.
SUMMARY
[0003] According to some embodiments, the disclosed technology provides methods and systems for a customer to create marketing content that can be rendered (or overlaid) over digital content without modifying the original content. For example, the digital content can be in the form of webpages, images, videos, audio/visual content, or any other digital media, and creating the tagged digital content involves tagging the digital content with additional data. In some embodiments, tagging digital content includes adding or embedding data that describes one or more items included in a webpage or a video, e.g., a good/service or an object included in the original content. Thus, a tagged item is a good/service or an object included in the original content, integrated with additional data. In some embodiments, tagging an item allows the item to be browsed online or even searched using a search engine. In some embodiments, tagged information can be viewed regardless of where the content is being viewed from, or how it is being viewed. In some embodiments, tags and/or tagged content can be shared by a user with other users of the disclosed system. In some embodiments, tags and/or tagged content can provide information about the good/service/product to a person viewing the tagged image. In some embodiments, multiple goods/services/products can be compared by comparing the tags associated with the images of the goods/services/products.
[0004] For purposes of illustrative discussions herein, a customer is a marketer who intends to advertise his or her goods and services by creating marketing content using embodiments of the disclosed technology. A user is a consumer who views or reviews (e.g., as text, audio, images, video, or a combination thereof) the marketing content created by the customer. In a hypothetical scenario, a customer is a brand marketing manager for a sports gear manufacturer who is creating marketing content by tagging images associated with a golf club. The images that can be tagged are not necessarily limited to images of goods offered by the customer (e.g., the sports gear manufacturer) and can be proprietary images or images freely available online. In some implementations, such images can be provided (by the disclosed system) to the customer/marketer/brand manufacturer. In some implementations, images can be provided by a customer for tagging by the disclosed system. A user/consumer can be a person who is interested in the golf club, or in other goods of the company's brand, and is a potential purchaser of the golf club.
[0005] In some embodiments, the marketing content created by a customer is portable across distribution channels. For example, the disclosed system employs one or more hashing technologies to identify untagged copies of the original content and allows pairing the original content with the tag(s) provided by the customer.
[0006] The advantages of the disclosed technology include, without limitation, that the tagged content is portable for rendering across different distribution channels. For example, consider a photograph taken during a celebrity event showing a celebrity wearing a particular shade of lipstick. The photograph is first distributed through a news channel website but subsequently distributed as part of original content through a content distribution network and multiple other websites. The customer's marketing team identifies that the celebrity is wearing their brand of lipstick, tags the image with the details of the goods, and adds extra marketing content. Any user of the disclosed system can see the tagged content whether the celebrity image is distributed via the news channel website, a fan blog website, the lipstick brand website, the celebrity event website, or other websites. The same tagged content is rendered whether the user is using a browser-based technology, a mobile application, or another form of user interface using the technology as disclosed herein.
[0007] Further, a user may be in a physical location interacting with a digital interaction point (e.g., an interactive digital signage) where the same celebrity image is being rendered. In disclosed embodiments, the user is able to click on the tag and review detailed information about the goods. Potential usage scenarios for this embodiment include, but are not limited to, interactive shopping displays, digital signage, digital concierge, etc. In some instances, when a shopper in a beauty or clothing apparel store interacts with digital content and associated tags, the disclosed technology can display tags along with goods availability information for that store location.
[0008] The disclosed system generates digital fingerprints associated with original content. The digital fingerprints can help in identifying the original content regardless of where the content is rendered or distributed. Further, the system can generate metadata derived from user interactions with the tagged content (e.g., data analytics and user statistics in connection with user preferences for goods/services). In some applications, the analytics collected can be provided to the same or other customers for assistance in creating marketing content.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] A clear understanding of the key features of the disclosed technology summarized above may be obtained by reference to the appended drawings, which illustrate the method and system of the disclosed technology, although it will be understood that such drawings are illustrative and, therefore, are not to be considered as limiting its scope with regard to other embodiments which the disclosed technology is capable of contemplating.
[0010] Fig. 1 illustrates an exemplary architecture of a content tagging and tag rendering system.
[0011] Figs. 2A and 2B are examples of interfaces displayed to a user who is viewing tagged content.
[0012] Fig. 3 is an example of an interface displayed to a customer who is creating marketing content.
[0013] Fig. 4 is a flowchart from the perspective of a server that generates the marketing content.
[0014] Fig. 5 is a process flow showing steps taken by a customer for creating tagged digital content.
[0015] Fig. 6 shows example data stored in a database associated with tagged information for multiple customers.
[0016] Fig. 7 is a screenshot of a tag placement editor for placing tags on an image.
[0017] Fig. 8 is a screenshot showing co-ordinates of a tag with respect to an image included in an original content.
[0018] Fig. 9 is a block diagram showing portable tags and tag content rendering upon original content across multiple distributions.
[0019] Fig. 10 is a screenshot of a dashboard associated with tagging an example image with tags.
[0020] Fig. 11 is an example screenshot of a user interface showing a tagged image with a search functionality.
[0021] Fig. 12 is an example screenshot of a user interface showing a tagged image with a scrolling wheel functionality.
[0022] Fig. 13 is a flowchart showing steps by a computer for generating a report in connection with managing digital rights.
DETAILED DESCRIPTION
[0023] Referring now to the technology in more detail, Fig. 1 illustrates an exemplary architecture of a digital content system 100. Digital content system 100 includes content management services 160 connected to user interfaces 101a, 101b, and 101c via network 102. Content management services 160 are also connected to content database 150 that stores one or more of digital content (such as videos and images), fingerprints (e.g., hashes) of images, and tagged content. User interfaces 101a, 101b, and 101c are coupled to user computing devices such as desktop computers, laptop computers, mobile phones, consumer wearable devices, and tablet computers. In some embodiments, the content management services 160 include a customer interface 110, tag API 120, tag manager 130, and content fingerprinting 140.
[0024] Customer interface 110 is a graphical user interface (e.g., web browser, desktop, or mobile) which communicates with content manager application service 111. Content manager application service 111 provides the application services to create, manage, and remove any content associated with a tag for a customer. Tag API 120 is a cloud-based service-oriented architecture providing an application programming interface for application service components provided by content manager application service 111. Tag API 120 includes request processor 121, which provides the application services and logic to respond to and route service requests and application events. Tag manager application service 130 provides the application services to create, manage, merge, update, and remove tags and tag content data structures. Tag manager application service 130 includes tag extractor application service 131. Tag extractor application service 131 provides the functionality to search content database 150 for tags and return the tag data in a suitable format. Content fingerprinting 140 provides application services to support the creation, management, audit, and removal of content fingerprint data structures. Content fingerprinting 140 includes fingerprint generator 141, which provides application services for the creation of unique hash fingerprints for various content types such as, but not limited to, digital media, text content, video, or audio.
[0025] Figs. 2A and 2B are examples of interfaces 200A and 200B displayed to a user (e.g., a consumer) who is viewing tagged content. For example, the user interface 101 in Fig. 2A displays original content 201 (e.g., an image), tags 203 numbered in the image as tag #1, tag #2, and tag #3, a tag content bar 202, a tag count 204, and a hide tags button 205 (for toggling the tags on and off). For purposes of discussions and illustrations herein, the term "original content" generally refers to any kind of digital media (e.g., images, videos, audio, text, and HTML content).
[0026] According to disclosed embodiments, tags are associated with objects or content included in images, text, and videos. Objects or content included in images and videos can be various goods or services provided by one or more customers (marketers). In this example, tag #1 is in connection with a pair of sunglasses included in original content 201. Tag #2 is in connection with a table on which the sunglasses are placed. Tag #3 is in connection with a drink placed on the table in close proximity to the sunglasses. Tags 203 can provide information about goods/services, the brand, manufacturer, price, stores/retail locations where the objects are available, and other such information. In some embodiments, the tag content bar 202 is invisible; in some embodiments, the tag content bar 202 can be made visible. Tag content bar 202 provides a user with the ability to access personal preference settings, such as, but not limited to, tag color preference, tag visibility preference, the number of tags on the original content, different categories of tags, one or more brands of preference, and one or more alerts/notifications about the product/service. In some embodiments, tag content bar 202 enables a user to access a search interface to search saved tags or shared tags. Settings 206 allow a user to access their personal information profile, modify email address and contact preferences, and view social affiliations/aspects of tags. Fig. 2B also indicates tag content rendering menu 301. In this example, tag content rendering menu 301 displays content A (e.g., a brand of a pair of sunglasses), content B (e.g., a model name of a pair of sunglasses), content C (e.g., available sizes and color variations), and content D (e.g., rich media and other supporting materials) associated with tag #1. That is, a tag can be associated with one or more pieces of digital content overlaid on an original (untagged) image for being rendered to a user via a user interface. In some embodiments, a tag identifier can be a logo, e.g., of a brand. For example, a tag identifier can be the "Swoosh" logo of the NIKE® brand. Thus, one advantage of the present technology is that the tagging is flexible, so that marketers can alter a tag identifier. In embodiments of the present technology, tags can be colored differently, designed, provide information about sales/clearances, or otherwise be customized in accordance with a marketer's or a brand's specifications. In some embodiments, the customization can include the system providing recommendations targeted to a user who is interacting with a tagged image based on monitoring his or her browsing habits/behavior. Based on this monitoring, the system can suggest alternate products from the same brand, products from different brands, or any other suitable product/brand by correlating the user's browsing behavior with the browsing behaviors of other end users who have browsed the product.
[0027] In another use case, educators may tag online images with educational content (such as text, documents, videos, links to relevant articles, student assignments, quiz questions, etc.). In accordance with disclosed embodiments, tags on images can be seen only by registered users, e.g., students who have signed up for an instructor's class. An instructor can provide a unique identifier (password) to his/her students. When a student uses the identifier to sign on to his/her user account, the system retrieves the relevant tagged images for display to the student.
[0028] Fig. 3 is an example of an interface 300 displayed to a customer who is creating marketing content for delivery to end users. Interface 300 includes an address bar displaying URI 301 of the original content. URI 301 is a Uniform Resource Identifier and, in the example of Fig. 3, is depicted as a web location that stores original content 201. For example, the URI 301 can correspond to an image stored at a third-party image repository or a content provider such as Getty Images®. A customer can place tags on an image using a tag placement editor 401. The customer creates tag content 402 by entering information, e.g., the customer can enter URI 301 of the original content in customer interface 110.
[0029] Content created by a customer via the customer interface 110 is rendered to the user via the tag content rendering menu 301 of UI 101. Tag content 402 includes the original content (e.g., a file name and a URI), a tag identifier (e.g., tag #1 or, otherwise, a system-generated identifier), coordinates of the tag with respect to the original content, and the content (e.g., Content A, Content B, Content C, and Content D) that is to be included as tags. In some embodiments, the disclosed system automatically generates metadata (corresponding to the tags) which is stored in a database (such as content database 150 shown in Fig. 1) along with tag content 402. For example, the metadata can include an original content fingerprint (e.g., a hash of the original content), the original content URI, coordinates of the tag, a filename and file size of a file that stores the tags, dimensions of the tags, and a date/time of creation; an illustrative shape is sketched below. In some embodiments, the system also generates analysis data regarding the goods/services, product descriptions, person/product names, or product purchasing instructions in connection with goods/services identified in a tag.
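An illustrative shape for that automatically generated metadata, with field names assumed from the examples in the description:

```typescript
// Assumed shape of the automatically generated tag metadata stored in
// content database 150, based on the examples listed in the description.
interface TagMetadata {
  originalContentFingerprint: string; // e.g., a hash of the original content
  originalContentUri: string;
  tagCoordinates: { x: number; y: number };
  tagFileName: string;                // file that stores the tags
  tagFileSize: number;                // in bytes
  tagDimensions: { width: number; height: number };
  createdAt: Date;                    // date/time of creation
}
```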
[0030] Fig. 4 is a flowchart of a server process 400 for generating tagged content. At step 404, the process receives a request from a customer (e.g., a marketer). The request includes the marketing content (e.g., information to be tagged) and customer credentials (e.g., a token) for identifying the customer. The request also includes a fingerprint (e.g., a hash) of the original content and tag placement coordinates. In some embodiments, the hash is generated by a client component such as a plug-in or a browser extension. In some embodiments, the process retrieves (at step 406) additional customer information from a customer profile database. Examples of customer information include, but are not limited to, a customer ID, a last login date, a time last tagged, a total tag count of the customer, etc. At step 410, the process queries a content database with the fingerprint included in the customer's request. The content database can include tags, tagged content, fingerprints of original content, tag placement coordinates with respect to the original content, and the like. The process compares the fingerprint in the customer's request with fingerprints stored in the content database.
[0031] Upon determining that the fingerprint in the customer's request matches a fingerprint stored in the content database, the server processes (step 414) the query response. For example, the process updates the record of the stored fingerprint with the information included in the customer's request. If, based on the stored information in the database, there are multiple tags associated with the original content, then the process analyzes the tag placement coordinates for each tag and each customer and reorganizes them so that multiple tags embedded on the same original content do not "bump" into one another. In some applications, the process also checks whether the total number of tags associated with an original content exceeds a threshold. If it does, the process notifies the customer with a message proposing to replace multiple individual tags with a "super tag." For example, if the original content includes 12 tags and the user only wants to see 5 tags or fewer at a given time, then, on the user's device, the individual tags are not rendered; instead, they are replaced by one super tag, usually in a corner, alerting the user that the image includes 12 tags. If the customer indicates that he or she does not want their marketing content included as part of a super tag, the process updates the tag placement coordinates for that customer so that the super tag and the customer's tag are offset relative to each other.
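As a rough illustration of the super-tag behavior described in paragraph [0031], the sketch below collapses tags into a single counting tag once a user's visibility threshold is exceeded; all names and the collapse policy are hypothetical:

```typescript
// Hypothetical sketch of collapsing individual tags into a "super tag".
interface PlacedTag {
  id: string;
  x: number;
  y: number;
  optOutOfSuperTag?: boolean; // customer declined inclusion in a super tag
}

function tagsToRender(tags: PlacedTag[], userMaxVisible: number): PlacedTag[] {
  if (tags.length <= userMaxVisible) {
    return tags; // few enough tags: render them individually
  }
  const collapsed = tags.filter(t => !t.optOutOfSuperTag);
  const standalone = tags.filter(t => t.optOutOfSuperTag); // rendered offset from the super tag
  // One corner tag alerts the user to the total number of tags on the image.
  const superTag: PlacedTag = { id: `super-tag(${collapsed.length})`, x: 0, y: 0 };
  return [superTag, ...standalone];
}
```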
[0032] Upon determining that the fingerprint in the customer's request does not match any fingerprint stored in the content database, the process creates a new original content record with a new record ID. The newly created record is updated with the tag placement coordinates provided by the customer and other relevant information. At step 418, the process prepares rendering metadata. For example, the process creates a response data structure with metadata relevant to the tagged content for presentation to the customer via the Tag API. The data structure updates the user interface of the customer. At step 422, the response is sent to the customer. In some applications, the process logs the response along with a time stamp for generating customer usage statistics. The process exits thereafter.
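A condensed sketch of the server flow in paragraphs [0030]-[0032] follows. The request and record shapes, and the use of an in-memory map in place of content database 150, are assumptions for illustration only:

```typescript
// Hedged sketch of the fingerprint lookup / record creation flow (steps 410-422).
interface TagRequest {
  token: string;       // customer credentials
  fingerprint: string; // hash of the original content
  x: number;
  y: number;
  content: string[];   // marketing content to be tagged
}

interface ContentRecord {
  id: string;
  fingerprint: string;
  tags: { x: number; y: number; content: string[] }[];
}

function handleTagRequest(req: TagRequest, db: Map<string, ContentRecord>): ContentRecord {
  let record = db.get(req.fingerprint); // step 410: query the content database
  if (record !== undefined) {
    // Step 414: fingerprint matched -- append the new tag to the stored record.
    record.tags.push({ x: req.x, y: req.y, content: req.content });
    // (A real implementation would reorganize coordinates here so tags do not "bump".)
  } else {
    // No match: create a new original-content record with a new record ID.
    record = {
      id: `rec-${db.size + 1}`,
      fingerprint: req.fingerprint,
      tags: [{ x: req.x, y: req.y, content: req.content }],
    };
    db.set(req.fingerprint, record);
  }
  return record; // steps 418/422: becomes the rendering-metadata response
}
```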
[0033] Fig. 5 is a process flow showing steps taken by a customer to create tagged digital content. At step 500, a customer (e.g., a marketer) logs into the system, typically via a customer portal running at a server. The portal prompts the customer for his or her login credentials. The system authenticates the customer via an authentication process 501. At step 502, the system determines whether the customer is authenticated. If the customer is not authenticated, the process loops back to step 500. After successful authentication, the portal presents a homepage/dashboard corresponding to the customer. At step 503, the customer navigates through the portal to launch the tagging user interface (UI). The system prompts the customer to enter the URI of the original content that the customer wishes to tag, and the customer enters it at step 504. The URI, for example, can point to an online library of images. The server retrieves the original content from the URI.
[0034] In some embodiments, the system includes a server and a client component (e.g., a plug-in or a browser extension). At step 505, the client component parses the information it receives from the server to generate a framed view of the original content, which is presented to the customer via the tagging user interface. The customer selects, via the tagging user interface, an object such as an image within the original content. The original content, for example, can be a webpage for goods/services or a webpage displaying a news article, and the image selected by the customer can be a baseball bat or a golf shirt included in the original content. The client component makes an asynchronous call to Tag API 120 requesting a hash. The Tag API uses the content fingerprinting services to generate a hash (i.e., a unique fingerprint) by running the original content through a hashing process. In some embodiments, where the hashing process runs depends on a setting chosen by the marketer: the default setting is to execute the hashing process on the client side; if the marketer's setting is not the default, the hashing process is performed by the server. An original content fingerprint 506 may be produced using technology such as hashing, URI hashing, perceptual image hashing, or another algorithmic process by which a universally unique identifier can be generated from the original content. In some implementations, the original content is not saved by the client component as part of the hashing process. At step 507, the customer places one or more tags associated with one or more objects included in the original content. At step 508, the customer creates tag content. The tagging UI requests the customer to enter tag details such as a tag name/title, a tag content type (e.g., a URI, text, media), the content to be included in the tag, x and y coordinates for placement of the tag, etc. In some implementations, the client component allows the customer to move the tag around with respect to the original content; the position (coordinates) at which the customer finalizes placement is recorded as the final location of the tag. In some embodiments, the generated fingerprint hash key is returned and stored in a memory associated with the client component for the duration of the tagging process. At step 509, the customer requests a preview of the tagged content, which is processed at step 510. At step 511, the tagged content is sent via Tag API 120 for storage in a content database coupled to the server. After a content lookup process 512 is initiated, the tag is overlaid (at step 513) on the original content. The customer clicks (at step 514) the tag and the tag content is rendered (at step 515). At step 516, the process determines whether the customer intends to create more tags. If yes, the process loops back to step 508 for creating additional tags. If no, the tagged content is published (at step 517) and the customer is returned (at step 519) to the dashboard of the customer portal. In some implementations, the process submits (via Tag API 120) a publish request at step 518 to the server.
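As one concrete possibility for producing fingerprint 506, the sketch below fingerprints content with a plain SHA-256 byte hash via the Web Crypto API; the disclosure also contemplates URI hashing and perceptual image hashing, which this minimal example does not cover:

```typescript
// Minimal client-side fingerprinting sketch (browser environment assumed).
async function fingerprintContent(url: string): Promise<string> {
  const bytes = await (await fetch(url)).arrayBuffer(); // content is hashed, not persisted
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, "0"))
    .join(""); // hex string used as the fingerprint key
}
```

Note that because the hash is computed over the content bytes rather than its address, the same unaltered image produces the same fingerprint at any URI, which is what makes tags portable across distribution channels (see Fig. 9).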
[0035] Fig. 6 shows example data stored in a database for tagged information from multiple customers. The figure depicts how the same original content 201 is tagged by multiple customers: customer A 607a, customer B 607b, ..., customer X 607x. Each customer selects the same original content, which, for example, is an image taken at a fashion show containing items from multiple customers. Each customer creates tag content and submits it to the Tag API 120. The Tag API 120 receives the customer requests (sequentially or simultaneously), creates the first single original content record with the original content fingerprint 606 and metadata 700, and assigns a sequential record ID number, e.g., content A 701, content B 712, ..., content n 713. Subsequent customer tags and tag content are appended with each customer request; the tag configuration is stored and published to the original content using the content management services 160, which stores the tags and tag content in content database 150.
[0036] Fig. 7 is a screenshot of a tag placement editor for placing tags on an image. For example, Fig. 7 illustrates how the customer uses the tag placement editor 401 to identify the x coordinate 801 and the y coordinate 802 for placing a tag at location 803 over the original content 201.
[0037] Fig. 8 is a screenshot showing coordinates of a tag with respect to an image. This figure illustrates how the server identifies the tag placement 803 coordinates relative to the original content container. In this example, an image is shown within a webpage rendered in a web browser. Upon a customer placing a tag on original content 201, the client component requests the server to identify the x and y coordinates of the placement location where the customer clicked on the image within the user interface. The server parses the content container 900 markup (e.g., HTML or XHTML) and, using JavaScript programmatic event information, determines relative and absolute information for positioning the tags. This is achieved by using functions that return the x or y coordinates of elements or markup tags within the markup. Identifying the absolute position of the upper-left corner of the original content 201 provides the x and y coordinates for correct placement of tag x coordinate 901 and tag y coordinate 902. In some embodiments, the server computes dimensional information (e.g., width, height, distance from bottom, distance from right, etc.) about the content. This information allows the server to determine whether the original content has specific scaling markup, which is beneficial because it reduces the dimensions of the rendered media and hence results in faster rendering of tagged content. This information can also be used by the server to recognize a need to scale or rotate the tags, convert the tags to a super tag, or hide the tags and enable the tag content bar in their place.
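The coordinate derivation described above maps naturally onto standard DOM APIs. The following is one hedged way a client component might compute click coordinates relative to the original content container; it is a sketch, not the disclosed implementation:

```typescript
// Sketch: tag coordinates relative to the original content's upper-left corner.
function tagCoordinates(img: HTMLImageElement, click: MouseEvent): { x: number; y: number } {
  const box = img.getBoundingClientRect(); // absolute position of the content container
  return {
    x: click.clientX - box.left, // cf. tag x coordinate 901
    y: click.clientY - box.top,  // cf. tag y coordinate 902
  };
}

// Scaling markup can be detected by comparing rendered and intrinsic dimensions;
// the resulting factor can be used to reposition or scale tags.
function scaleFactor(img: HTMLImageElement): number {
  return img.clientWidth / img.naturalWidth;
}
```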
[0038] Fig. 9 is a block diagram showing portable tags and tag content rendered upon original content across multiple distributions. Fig. 9 depicts a scenario where original content (an image of a pair of sunglasses) is distributed across multiple channels. For example, channel A 1011 is associated with a travel blog website and channel B 1012 is a resort marketing website. The original content is also distributed through other channels, such as channel X 1013, which corresponds to a mobile device. The server generates the same fingerprint for the browser application 1014 and the mobile application 1015. The server requests tag information from the Tag API 120. The tag information is used to query the content database for the original content 201 by matching the fingerprint 606 and returning tag 710, tag 720, tag content 712, and tag content 721 to each distribution channel. The server enables portability of original content tags because the same, unaltered image distributed via different channels generates the same fingerprint hash regardless of the URI associated with the content.
[0039] Fig. 10 is a screenshot of a dashboard for tagging images. In some embodiments, the dashboard is accessible via the world wide web. A marketer intending to tag an image can access the dashboard by entering an address in region 1002. In this example, the marketer is tagging multiple items included in the image 1012. The tags in this example are numbered 1, 2, 3, ..., with tag #1 identifying a product, e.g., a kayak located in image 1012. The name of the product can be entered in region 1004 of the dashboard. The brand, manufacturer, or designer of the product is entered in region 1006. In this example, the tag content type is a target URI and a description of the product. The target URI (entered in region 1008 of the dashboard) can provide additional information about the product, and a description of the product can be provided in region 1010 of the dashboard. The lower part of the dashboard reveals various tabs, e.g., tags (tab 1013), insights (tab 1014), analysis (tab 1016), metadata (tab 1018), notes (tab 1020), and activity (tab 1022). Once an image is tagged, the tagged image can appear on multiple pages and/or websites on the world wide web that link to it. One advantage of the present technology is that the dashboard provides a centralized manner of updating or otherwise changing tagged images, so that the updates or changes become visible at every instance where the tagged image is used on the world wide web. Thus, changing the tags centrally via the dashboard can cause the changes to be propagated to multiple webpages or websites where the tagged image is used or rendered. Another advantage of the present technology is that the tagged images are rendered in unaltered form, regardless of the way (e.g., via an app or a webpage) or the device (e.g., any type of electronic device) by which the tagged image is accessed.
[0040] Furthermore, in embodiments where there are cascaded links, e.g., a webpage A linking to a webpage B which includes a link to a tagged image, a visitor browsing webpage A can still view the tagged image. Embodiments of the present technology do not limit the number of phases or rounds through which a tagged image can be linked in order to be viewed as a tagged image. Thus, yet another advantage of the present technology is that the tags in a tagged image remain intact and consistent across multiple copies or impressions of the image.
[0041] In some embodiments, the disclosed system can generate metadata derived from user interactions with the tagged content (e.g., data analytics and user statistics in connection with user preferences for goods/services). In some applications, the analytics collected can be provided to the same customer or to other customers for assistance in creating marketing content. In some embodiments, user behavior and brand behavior can be predicted using statistical learning methods. In some embodiments, the disclosed system can create and offer filters to customers/marketers that identify users who have demonstrated certain behaviors, so that those users receive special or specialized tags, which is likely to encourage select users to pay particular attention to a brand. As an example, a person who has engaged frequently with tagged images associated with a given brand of shoes might be invited to look at and vote on next year's product line. The invitation can be delivered via the content box of a tag in a tagged image of the shoes (a sketch of such a filter appears after paragraph [0042]).

[0042] Fig. 11 is an example screenshot of a user interface showing a tagged image. The tags are shown by the circled letter "Q" in Fig. 11. In some embodiments, when a user clicks a tag on a user interface associated with the disclosed system, the user interface can reveal a search functionality (e.g., shown in region 1102 in Fig. 11) to search for other images with the same product, obtain additional information about a product or a brand, engage in live chat messaging with brand representatives, and/or buy a product. For example, upon clicking on a tagged image showing a model wearing a gray sweater, a search box can be displayed with options for the user to view the same product on different models, in different backgrounds, lighting conditions, formal/informal settings, different colors/shades of the same product, and the like. In some embodiments, the search box can display options for the same type/category of product from different brands, such as LLBEAN, PATAGONIA, and EDDIE BAUER. Accordingly, a user can select the options he or she desires and run a search, and one or more search results are displayed by the disclosed system. The results of the search may include images. In some embodiments, thumbnails of the images are displayed, and a user can click on an image thumbnail to view an enlarged image.
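Returning to the behavioral filters of paragraph [0041], the following speculative sketch selects users whose engagement with a brand's tags crosses a threshold; the interaction-log format and threshold policy are assumptions, not the disclosed design:

```typescript
// Speculative sketch of a behavior-based filter for specialized tags.
interface Interaction {
  userId: string;
  brand: string;
  clicks: number; // tag clicks attributed to this user for this brand
}

function frequentEngagers(log: Interaction[], brand: string, minClicks: number): string[] {
  const totals = new Map<string, number>();
  for (const entry of log) {
    if (entry.brand !== brand) continue;
    totals.set(entry.userId, (totals.get(entry.userId) ?? 0) + entry.clicks);
  }
  // Users above the threshold might receive an invitation via a tag's content box.
  return [...totals].filter(([, clicks]) => clicks >= minClicks).map(([userId]) => userId);
}
```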
[0043] Fig. 12 is another example screenshot of a user interface showing a tagged image. In some embodiments, the disclosed system can display (via the user interface) a scrolling wheel (e.g., element 1202 in Fig. 12) or a sliding bar to browse through the results of a search associated with a tagged image. When the person viewing the user interface selects an image, the scrolling wheel or sliding bar can be rotated/moved to allow the person to view different colors/shades of the same product, e.g., from a certain brand/manufacturer. In a hypothetical example, via the user interface, a user can try out different colors of a sweater or different shades of lipstick on a model's torso or lips in a tagged image. The system allows the different shades/colors to be overlaid or superimposed on the original image (or a portion thereof) without altering or modifying the original image; for example, when the user closes the tag, the user sees the original content without modification. Embodiments of the present technology allow object recognition (e.g., using artificial intelligence or machine learning methods) by scanning an image to identify different objects (such as lips, eyes, face, eyelashes, cheeks, ears, etc.). Upon identification of an object, the disclosed system can automatically overlay a tag on that object. This is an advantage because it provides automated tagging of images without necessarily requiring manual tagging. In some applications, rotating the scrolling wheel or moving the sliding bar can display a different model wearing, or otherwise associated with, the product in the image. In some embodiments, the scrolling wheel is shown on the user interface in the form of a partial wheel or an arc, which can suggest to the person that there are additional images to view.
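As a hedged illustration of overlaying a shade without touching the source image, the sketch below paints the unaltered image onto a canvas and tints only the selected region on that separate layer; region selection (e.g., from object recognition) is assumed to happen elsewhere:

```typescript
// Sketch: superimpose a color shade on part of a tagged image.
// The original file is never modified; the tint exists only on the canvas layer.
function overlayShade(
  canvas: HTMLCanvasElement,
  img: HTMLImageElement,
  region: { x: number; y: number; w: number; h: number },
  rgba: string, // e.g., "rgba(180, 30, 60, 0.35)" for a translucent shade
): void {
  const ctx = canvas.getContext("2d");
  if (ctx === null) return;
  ctx.drawImage(img, 0, 0, canvas.width, canvas.height); // render the unaltered original
  ctx.fillStyle = rgba;
  ctx.fillRect(region.x, region.y, region.w, region.h);  // tint only the selected object
}

// Closing the tag can simply re-draw the image without the fill,
// restoring the original content without modification.
```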
[0044] In another example, a person can scroll through several images to find a model in an image who resembles the person (e.g., in terms of looks or skin complexion); upon finding such an image, if the person does not like the shade of the product (e.g., a sweater) associated with the model, the disclosed technology allows the person to freeze the image of the model and further try out, cascade, or otherwise provision different colors/shades of the product (e.g., the sweater) on the model. Thus, one advantage of the present technology is providing a better user experience when a user is interacting with tagged images, delivering content that may be of interest to the user and that can lead to a sale of the marketer's products/services. Moreover, while embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine- or computer-readable media used to actually effect the distribution.
According to disclosed embodiments, an image can be owned by a brand or by other entities, such as a photographer. In some embodiments, the system allows brands to register their images by associating digital rights (e.g., copyrights) with their images. This helps protect the rights of image owners and prevents illegal distribution. For example, a photographer who owns an image can claim the digital rights to the image. Fig. 13 is a flowchart showing steps performed by a computer for generating a report in connection with managing digital rights. The process starts at step 1302 and receives (at step 1304) log-in credentials of a person (e.g., the owner of an image) for managing digital rights. If the log-in credentials are authenticated (at step 1306), the process displays (at step 1308) a user interface. One or more images uploaded by the person are received (at step 1310) via the user interface. At step 1312, the process generates a fingerprint of the image and matches (at step 1314) the fingerprint against fingerprints of images stored in a database. If there is no match, the process stores (at step 1316) the fingerprint in the database. If there is a match, the process receives (at step 1318) digital rights content and appends (at step 1320) it to the fingerprint. The appended fingerprint is saved in a digital rights database at step 1324. At step 1326, the process checks whether the fingerprint of an image rendered on a user's device matches a fingerprint stored in the digital rights database. In some embodiments, the digital rights database and the content database (e.g., content database 150 in Fig. 1) are the same database; in others, they are different databases. If there is a match, the process parses (at step 1332) the image metadata. The image metadata can be located in the content database; the metadata, along with additional information, can also be added to the digital rights database. The additional information can include fields associated with the image fingerprint, such as the name and contact information of the image owner, licensing instructions, and the like. At step 1334, further data is added to capture where (e.g., a URL or IP address) and when (e.g., a timestamp) the image was rendered. Such data can be useful for a customer/brand and also for the image owner, e.g., with the intent of licensing the image. At step 1336, through the UI, the owner may initiate a request to receive a report, in some format, of all or filtered instances in which their image or images were rendered on users' devices. For example, an owner may want a report detailing all instances in which their image of the tennis player Federer at a given tournament was rendered at a URL belonging to the Canadian branch of ESPN. At step 1338, the system queries the digital rights database to retrieve all records meeting the report filtering criteria. The report might show the count of times the image was rendered at URLs belonging to the Canadian branch of ESPN, the count of instances across all ESPN URLs, the counts in monthly buckets, or counts separated by time of day or by geographical area within Canada. Generally, the report can include details of instances when the tagged image was presented to a user, such as a time stamp for each time the tagged image was rendered, the brand(s) associated with the tagged image, and the number of times the tag was clicked.
At step 1340, the process organizes and formats the results (from step 1338) into a format suitable for human or machine reading. The process ends at step 1342.
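A minimal sketch of the report query in steps 1336-1340 follows; the render-log record shape and filter parameters are assumptions, not the disclosed schema:

```typescript
// Hypothetical sketch of filtering and bucketing render-log records for a report.
interface RenderLogEntry {
  fingerprint: string; // fingerprint of the rendered image
  url: string;         // where the image was rendered
  renderedAt: Date;    // when the image was rendered
}

function buildReport(
  log: RenderLogEntry[],
  fingerprint: string,
  urlPattern: RegExp, // e.g., a pattern matching the Canadian branch's URLs
): { total: number; byMonth: Map<string, number> } {
  const hits = log.filter(e => e.fingerprint === fingerprint && urlPattern.test(e.url));
  const byMonth = new Map<string, number>(); // monthly buckets (step 1338)
  for (const e of hits) {
    const bucket = e.renderedAt.toISOString().slice(0, 7); // "YYYY-MM"
    byMonth.set(bucket, (byMonth.get(bucket) ?? 0) + 1);
  }
  return { total: hits.length, byMonth }; // organized for reading at step 1340
}
```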
[0045] Further examples of machine- or computer-readable media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disc Read-Only Memory (CD-ROMs), Digital Versatile Discs (DVDs), etc.), among others, and transmission-type media such as digital and analog communication links.
[0046] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." As used herein, the terms "connected," "coupled," or any variant thereof mean any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Additionally, the words "herein," "above," "below," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of this application. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number, respectively. The word "or," in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
[0047] The above detailed description of embodiments of the disclosure is not intended to be exhaustive or to limit the teachings to the precise form disclosed above. While specific embodiments of the disclosure are described above for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative embodiments may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times. Further, any specific numbers noted herein are only examples; alternative implementations may employ differing values or ranges.

[0048] The teachings of the disclosure provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various embodiments described above can be combined to provide further embodiments.
[0049] Any patents and applications and other references noted above, including any that may be listed in accompanying filing papers, are incorporated herein by reference. Aspects of the disclosure can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further embodiments of the disclosure.
[0050] These and other changes can be made to the disclosure in light of the above Detailed Description. While the above description describes certain embodiments of the disclosure, and describes the best mode contemplated, no matter how detailed the above appears in text, the teachings can be practiced in many ways. Details of the system may vary considerably in implementation, while still being encompassed by the subject matter disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the disclosure with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the disclosure to the specific embodiments disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the disclosure encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the disclosure under the claims.
[0051] While certain aspects of the disclosure are presented below in certain claim forms, the inventors contemplate the various aspects of the disclosure in any number of claim forms. For example, while only one aspect of the disclosure is recited as a means-plus-function claim under 35 U.S.C. §112, ¶6, other aspects may likewise be embodied as a means-plus-function claim, or in other forms, such as being embodied in a computer-readable medium. (Any claims intended to be treated under 35 U.S.C. §112, ¶6 will begin with the words "means for.") Accordingly, the applicant reserves the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the disclosure.

Claims

1. A method for generating tagged images, the method comprising:
receiving, from a customer, a network address identifier corresponding to a first digital content;
hashing the first digital content for storage in a database;
receiving a placement location of one or more visual tags in connection with one or more objects included in the first digital content;
receiving, from the customer, a second digital content that is embeddable in the first digital content;
placing the second digital content in the first digital content as the one or more visual tags in close proximity to the one or more objects; and generating tagged content including the one or more visual tags.
2. The method of claim 1, further comprising:
detecting that an end user is accessing a webpage linked to the first digital content, wherein the webpage is associated with a server of the customer; and
transmitting, to the server of the customer, the tagged content including the one or more visual tags for display on the webpage.
3. The method of claim 2, wherein the detecting is based on a software plugin included in the webpage.
4. The method of claim 2, wherein the detecting is based on a software extension installed in a web browser at a computer operated by the end user.
5. The method of claim 2, wherein the detecting is based on a browser or a software extension installed in a web browser at a mobile device operated by the end user.
6. The method of claim 1, wherein the one or more visual tags are in Extensible Markup Language (XML) format.
7. The method of claim 2, wherein the webpage linked to the first digital content is a first webpage, and wherein the first digital content also includes a second webpage.
8. The method of claim 7, wherein the second webpage includes at least one of: text, web page links, audio, or video.
9. The method of claim 1, further comprising:
receiving a search request, wherein the search request includes a first tag; comparing the first tag with the one or more visual tags; and
upon determining a match, providing the tagged content as part of a response to the search request.
10. The method of claim 1, further comprising:
receiving, from a first user, a request to share the one or more visual tags with a second user; and
in response to the request, providing the one or more visual tags to the second user.
11. The method of claim 1, further comprising:
providing, to a user, the tagged content including the one or more visual tags; monitoring interactions of the user with the tagged content;
generating metadata associated with the interactions of the user with the tagged content; and
saving the metadata for computation of analytics and user statistics.
12. The method of claim 1, further comprising:
overlaying one or more shades of color on at least a portion of the tagged content without altering the first digital content.
PCT/US2018/037797 2017-06-16 2018-06-15 Transportable marketing content overlaid on digital media WO2018232270A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762521189P 2017-06-16 2017-06-16
US62/521,189 2017-06-16

Publications (1)

Publication Number Publication Date
WO2018232270A1 true WO2018232270A1 (en) 2018-12-20

Family

ID=64659530

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/037797 WO2018232270A1 (en) 2017-06-16 2018-06-15 Transportable marketing content overlaid on digital media

Country Status (1)

Country Link
WO (1) WO2018232270A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110099064A1 (en) * 2007-02-20 2011-04-28 Google Inc. Association of Ads with Tagged Audiovisual Content
US7975019B1 (en) * 2005-07-15 2011-07-05 Amazon Technologies, Inc. Dynamic supplementation of rendered web pages with content supplied by a separate source
US20150170245A1 (en) * 2013-12-13 2015-06-18 DFS Medialabs, LLC Media content instance embedded product marketing



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18817189

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18817189

Country of ref document: EP

Kind code of ref document: A1