US20160180402A1 - Method for recommending products based on a user profile derived from metadata of multimedia content - Google Patents


Info

Publication number
US20160180402A1
Authority
US
United States
Prior art keywords
item
metadata
concept
attributes
attribute
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/627,264
Inventor
Mohammad Sabah
Mohammad Iman SADREDDIN
Shafaq Abdullah
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honest Company Inc
Original Assignee
Honest Company Inc
Insnap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honest Company Inc, Insnap Inc filed Critical Honest Company Inc
Priority to US14/627,264
Assigned to InSnap, Inc. reassignment InSnap, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ABDULLAH, SHAFAQ, SABAH, MOHAMMAD, SADREDDIN, MOHAMMAD IMAN
Publication of US20160180402A1
Assigned to THE HONEST COMPANY, INC. reassignment THE HONEST COMPANY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: INSNAP INC.
Assigned to JPMORGAN CHASE BANK, N.A. reassignment JPMORGAN CHASE BANK, N.A. PATENT SECURITY AGREEMENT Assignors: THE HONEST COMPANY, INC.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 5/048 Fuzzy inferencing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q 30/0241 Advertisements
    • G06Q 30/0251 Targeted advertisements
    • G06Q 30/0269 Targeted advertisements based on user profile or attribute
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/23 Updating
    • G06F 16/2379 Updates performed during online database operations; commit processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/24 Querying
    • G06F 16/245 Query processing
    • G06F 16/2457 Query processing with adaptation to user needs
    • G06F 16/24578 Query processing with adaptation to user needs using ranking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F 16/284 Relational databases
    • G06F 16/285 Clustering or classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/48 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/02 Knowledge representation; Symbolic representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/53 Querying
    • G06F 16/532 Query formulation, e.g. graphical querying
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F 16/5838 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/903 Querying
    • G06F 16/9038 Presentation of query results

Definitions

  • Embodiments of the present disclosure generally relate to data analytics. More specifically, embodiments relate to recommending products based on a user profile derived from metadata of digital multimedia (e.g., images, videos, etc.).
  • the images and videos can represent mementos of various times and places experienced in an individual's life.
  • mobile devices allow individuals to easily capture digital multimedia. For instance, cameras in mobile devices have steadily improved in quality and can now capture high-resolution images. Further, mobile devices now commonly have a storage capacity that can store thousands of images. And because individuals can easily carry smart phones with them, they can take a greater number of images in many places.
  • One embodiment presented herein describes a method for identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files.
  • the method generally includes extracting a product feed.
  • the product feed lists one or more items.
  • the method also includes identifying, for each item in the product feed, one or more attributes describing the item.
  • Each item is mapped to concepts of an interest taxonomy based on the identified one or more attributes for the item.
  • One or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files.
  • Each item is associated to one or more of the users based on the mapping.
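The method steps above can be sketched end to end. This is an illustrative Python sketch only: the function names, the sample taxonomy, and the set-overlap mapping rule are all invented for illustration, not taken from the disclosure.

```python
# Hypothetical sketch of the claimed pipeline: extract a product feed,
# map each item to taxonomy concepts via its attributes, then associate
# items with the users linked to those concepts.

def extract_product_feed():
    """Retrieve a product feed listing one or more items (stub data)."""
    return [{"name": "running shoe", "attributes": {"shoe", "running"}}]

def map_item_to_concepts(item, taxonomy):
    """Map an item to every concept sharing at least one attribute."""
    return {c for c, attrs in taxonomy.items() if item["attributes"] & attrs}

def recommend(taxonomy, user_concepts):
    """Associate each feed item with users tied to a matching concept."""
    recs = {}
    for item in extract_product_feed():
        concepts = map_item_to_concepts(item, taxonomy)
        users = {u for c in concepts for u in user_concepts.get(c, ())}
        recs[item["name"]] = users
    return recs

taxonomy = {"fitness": {"running", "gym"}, "formalwear": {"suit"}}
user_concepts = {"fitness": {"alice", "bob"}}
print(recommend(taxonomy, user_concepts))  # maps the shoe to the fitness users
```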
  • Other embodiments include, without limitation, a computer-readable medium that includes instructions that enable a processing unit to implement one or more aspects of the disclosed methods, as well as a system having a processor, memory, and application programs configured to implement one or more aspects of the disclosed methods.
  • FIG. 1 illustrates an example computing environment, according to one embodiment.
  • FIG. 2 further illustrates the mobile application described relative to FIG. 1 , according to one embodiment.
  • FIG. 3 further illustrates the analysis tool described relative to FIG. 1 , according to one embodiment.
  • FIG. 4 further illustrates the product feed extractor described relative to FIG. 1 , according to one embodiment.
  • FIG. 5 illustrates a method for building an interest taxonomy across a user base, according to one embodiment.
  • FIG. 6 illustrates a method for inferring user interests from concepts derived based on image metadata, according to one embodiment.
  • FIG. 7 illustrates a method for recommending products based on inferred interests derived from image metadata, according to one embodiment.
  • FIG. 8 illustrates an application server computing system, according to one embodiment.
  • Embodiments presented herein describe techniques for recommending products to users based on user interests inferred from image metadata.
  • Digital images provide a wealth of information valuable to third parties (e.g., advertisers, marketers, and the like). For example, assume an individual takes pictures at a golf course using a mobile device (e.g., a smart phone, tablet, etc.). Further, assume that the pictures are the only indication the individual was at the golf course (e.g., because the individual made only cash purchases and signed no registers). Metadata associated with these images can place the individual at the golf course at a specific time. Further, event data could be used to determine what was going on at that time (e.g., a specific tournament). Such information may be useful to third parties, e.g., for targeted advertising and recommendations.
  • an advertiser might not be able to identify an effective audience for targeting a given product or service based on such information alone. Even if image metadata places an individual at a golf course at a particular point in time, the advertiser might draw inaccurate inferences about the individual. For example, the advertiser might assume that because the metadata places the individual at a high-end golf course, the individual is interested in high-end golf equipment. The advertiser might then recommend other high-end equipment or other golf courses to that individual. If the individual rarely plays golf or does not usually spend money at high-end locations, such recommendations may lead to low conversion rates for the advertiser. Historically, advertisers have generally been forced to accept low conversion rates, as techniques for identifying individuals likely to be receptive to or interested in a given product or service are often ineffective.
  • Embodiments presented herein describe techniques for recommending products based on user interests inferred from metadata of digital multimedia (e.g., images and videos).
  • a multimedia service platform provides a mobile application which allows users to upload digital multimedia files and metadata to the platform from a mobile device. Further, the multimedia service platform may identify patterns from metadata extracted from images and videos. The metadata may describe where and when a given multimedia file was taken. Further, in many cases, embodiments presented herein can identify latent relationships between user interests from collections of metadata from multiple users. For example, if many users who take pictures at golf courses also take pictures at an unrelated event (e.g., of a traveling museum exhibit), then the system disclosed herein can discover a relationship between the interests. Thereafter, advertising related to golfing products and services could be targeted to individuals who publish pictures of the traveling museum exhibit, regardless of any other known interest in golf.
  • the multimedia service platform evaluates metadata corresponding to each image or video submitted to the platform against a knowledge graph.
  • the knowledge graph provides a variety of information about events, places, dates, times, etc. that may be compared with metadata of the image or video.
  • the knowledge graph may include weather data, location data, event data, and online encyclopedia data.
  • attributes associated with an event may include a name, location, start time, end time, price range, etc.
  • the multimedia service platform correlates spatiotemporal metadata from a digital image or video with a specific event in the knowledge graph. That is, the knowledge graph is used to impute attributes related to events, places, dates, times, etc., to a given digital image or video based on the metadata provided with that image or video.
  • the analysis tool represents attributes imputed to digital multimedia from a user base in a user-attribute matrix, where each row of the matrix represents a distinct user and each column represents an attribute from the knowledge graph that can be imputed to a digital multimedia file.
  • the analysis tool may add columns to the user-attribute matrix as additional attributes are identified.
  • the cells of a given row indicate how many times a given attribute has been imputed to a digital multimedia file published by a user corresponding to that row. Accordingly, when the analysis tool imputes an attribute to a digital multimedia file (based on the file metadata), a value for that attribute is incremented in the user-attribute matrix. Doing so allows the multimedia service platform to identify useful information about that user.
  • the analysis tool may identify that a user often attends sporting events, movies, participates in a particular recreational event (e.g., skiing or golf), etc.
  • the analysis tool may identify information about events that the user attends, such as whether the events are related to a given sports team, whether the events are related to flights from an airport, a range specifying how much the event may cost, etc.
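The user-attribute matrix described above can be sketched as a dict of counters: rows are users, columns are attributes, and each cell counts how many files an attribute has been imputed to. This is an assumed representation for illustration; the attribute names are invented.

```python
from collections import Counter, defaultdict

# Sparse sketch of the user-attribute matrix: new attribute "columns"
# appear on demand, matching the note that columns may be added as
# additional attributes are identified.
user_attribute = defaultdict(Counter)

def impute(user, attributes):
    """Record attributes imputed to one digital multimedia file."""
    for attr in attributes:
        user_attribute[user][attr] += 1  # increment the cell for this user

impute("u1", ["golf", "sunny", "weekend"])
impute("u1", ["golf", "rain"])
impute("u2", ["concert"])

print(user_attribute["u1"]["golf"])  # 2
```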
  • the multimedia service platform may learn concepts.
  • a concept is a collection of one or more identified attributes.
  • the multimedia service platform may perform machine learning techniques to learn concepts from the attributes of the user-attribute matrix. For example, the multimedia service platform may score each attribute against each respective concept.
  • the multimedia service platform may associate attributes that satisfy specified criteria (e.g., the top five scores per concept, attributes exceeding a specified threshold, etc.) to a given concept.
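The two membership criteria mentioned above (top scores per concept, or scores over a threshold) can be sketched as follows. The scores and attribute names here are made up for illustration.

```python
# Hypothetical sketch: pick the attributes associated with a concept
# either by top-N score or by a score threshold, as described above.

def concept_attributes(scores, top_n=5, threshold=None):
    """Return attributes satisfying the chosen membership criterion."""
    if threshold is not None:
        return {a for a, s in scores.items() if s >= threshold}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:top_n])

scores = {"golf": 0.9, "sunny": 0.4, "concert": 0.1, "tee": 0.7}
print(concept_attributes(scores, top_n=2))        # golf and tee
print(concept_attributes(scores, threshold=0.5))  # golf and tee
```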
  • an interest taxonomy is a hierarchical representation of user interests based on the concepts.
  • the interest taxonomy can identify general groups (e.g., sports, music, and travel) and sub-groups (e.g., basketball, rock music, and discount airlines) of interest identified from the concepts.
  • the multimedia service platform may use the interest taxonomy to discover latent relationships between concepts. For example, the multimedia service platform may build a predictive learning model using the interest taxonomy. The multimedia service platform could train the predictive learning model using existing user-to-concept associations. Doing so would allow the multimedia service platform to use the model to predict associations between users and concepts they are not currently associated with.
  • the multimedia service platform may map distinct product and service feeds of third parties (e.g., retailers, travel services, venues, etc.) to the user interest taxonomy to identify products and services to recommend to a given user.
  • a product feed is a listing of items that are provided commercially.
  • a product feed of a clothing retailer may list items such as shirts, pants, shoes, and accessories.
  • each item may contain various information about the item, such as a name of the item, type of the item, price of the item, size information for the item, description of the item, and the like.
  • the product feed may be hosted on a website of the third party or be provided by the third party to the multimedia service platform.
  • a product feed extractor of multimedia service platform retrieves a product feed from a third party system, such as from a web server of a retailer.
  • the product feed extractor evaluates each item in the product feed to identify item attributes.
  • the product feed extractor may build an item-attribute matrix, where rows represent items and columns represent attributes. Each cell includes a bit representing whether a given item has a given attribute.
  • the product feed extractor determines a mapping for each product to a concept, if available, based on the item-attribute matrix.
  • the product feed extractor may then identify users that may be interested in a given item based on whether a user is associated with a corresponding concept.
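The item-attribute bit matrix and the item-to-concept mapping described above can be sketched as below. The items, attributes, concepts, and the most-overlap mapping rule are assumptions for illustration.

```python
# Hypothetical sketch of the item-attribute matrix: one row of bits per
# item, one column per attribute, plus a simple concept mapping.

items = {"dress shoe": {"shoe", "formal"},
         "trail runner": {"shoe", "running", "outdoor"}}
attributes = sorted({a for attrs in items.values() for a in attrs})

# Each cell is a bit: does this item have this attribute?
matrix = {name: [int(a in attrs) for a in attributes]
          for name, attrs in items.items()}

concepts = {"fitness": {"running", "outdoor"}, "office": {"formal"}}

def map_to_concept(name):
    """Return the concept sharing the most attributes, if any."""
    row = {a for a, bit in zip(attributes, matrix[name]) if bit}
    best = max(concepts, key=lambda c: len(concepts[c] & row))
    return best if concepts[best] & row else None

print(map_to_concept("trail runner"))  # fitness
```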
  • FIG. 1 illustrates an example computing environment 100 , according to one embodiment.
  • the computing environment 100 includes one or more mobile devices 105 , an extract, transform, and load (ETL) server 110 , an application server 115 , and one or more third party systems 125 , connected to a network 130 (e.g., the Internet).
  • the mobile devices 105 include a mobile application 106 which allows users to interact with a multimedia service platform (represented by the ETL server 110 and the application server 115 ).
  • the mobile application 106 is developed by a third-party organization (e.g., a retailer, social network provider, fitness tracker developer, etc.).
  • the mobile application 106 may send images 108 and associated metadata to the multimedia service platform, e.g., through a software development kit (SDK) provided by the multimedia service platform.
  • the mobile application 106 may access a social media service (application service 116 ) provided by the multimedia service platform.
  • the social media service allows users to capture, share, and comment on images 108 as part of existing social networks (or in conjunction with those networks).
  • a user can link a social network account to the multimedia service platform through application 106 . Thereafter, the user may capture a number of images and submit the images 108 to the social network.
  • the application 106 retrieves the metadata from the submitted images. Further, the mobile application 106 can send images 108 and metadata to the multimedia service platform.
  • the multimedia service platform uses the metadata to infer latent interests of the user.
  • the mobile application 106 extracts Exchangeable Image Format (EXIF) metadata from each image 108 .
  • the mobile application 106 can also extract other metadata (e.g., PHAsset metadata in Apple iOS devices) describing additional information, such as GPS data.
  • the mobile application 106 may perform extract, transform, and load (ETL) operations on the metadata to format the metadata for use by components of the multimedia service platform. For example, the mobile application 106 may determine additional information based on the metadata, such as whether a given image was taken during daytime or nighttime, whether the image was taken indoors or outdoors, whether the image is a “selfie,” etc. Further, the mobile application 106 also retrieves metadata describing application use.
  • Such metadata includes activity by the user on the mobile application 106 , such as image views, tagging, etc. Further, as described below, the mobile application 106 provides functionality that allows a user to search through a collection of images by the additional metadata, e.g., searching a collection of images that are “selfies” and taken in the morning.
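One of the inferred fields mentioned above, daytime versus nighttime, can be derived from the EXIF timestamp. This sketch assumes the standard EXIF `DateTimeOriginal` string format and a fixed 6:00–18:00 daytime window; the window is an assumption, not taken from the disclosure.

```python
from datetime import datetime

# Hypothetical sketch: classify an image as daytime or nighttime from
# its EXIF timestamp ("YYYY:MM:DD HH:MM:SS" format).

def is_daytime(exif_datetime):
    """Return True if the capture hour falls in an assumed 6-18 window."""
    taken = datetime.strptime(exif_datetime, "%Y:%m:%d %H:%M:%S")
    return 6 <= taken.hour < 18

print(is_daytime("2015:02:20 14:31:07"))  # True
print(is_daytime("2015:02:20 23:05:00"))  # False
```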
  • the ETL server 110 includes an ETL application 112 .
  • the ETL application 112 receives streams of image metadata 114 (e.g., the EXIF metadata, PHAsset metadata, and additional metadata) from mobile devices 105 . Further, the ETL application 112 cleans, stores, and indexes the image metadata 114 for use by the application server 115 . Once processed, the ETL application 112 may store the image metadata 114 in a data store (e.g., such as in a database) for access by the application server 115 .
  • the ETL server 110 may be a physical computing system or a virtual machine computing instance in the cloud.
  • the ETL server 110 may comprise multiple servers configured as a cluster (e.g., via the Apache Spark framework on top of Hadoop-based storage). This architecture allows the ETL server 110 to process large amounts of images and image metadata sent from mobile applications 106.
  • an application service 116 communicates with the mobile application 106 .
  • the application server 115 may be a physical computing system or a virtual machine computing instance in the cloud. Although depicted as a single server, the application server 115 may comprise multiple servers configured as a cluster (e.g., via the Apache Spark framework on top of a Hadoop-based storage architecture). This architecture allows the application servers 115 to process large amounts of images and image metadata sent from mobile applications 106 .
  • the application server 115 includes an analysis tool 117 , a knowledge graph 118 , and a user interest taxonomy 119 .
  • the analysis tool 117 generates the user interest taxonomy 119 based on image metadata 114 from image collections of multiple users.
  • the user interest taxonomy 119 represents interests inferred from image attributes identified from the knowledge graph 118 .
  • the knowledge graph 118 includes a collection of attributes which may be imputed to an image. Examples of attributes include time and location information, event information, genres, price ranges, weather, subject matter, and the like.
  • the analysis tool 117 builds the knowledge graph 118 using weather data, location data, events data, encyclopedia data, and the like from a variety of data sources.
  • the analysis tool 117 imputes attributes from the knowledge graph 118 to the images 108 based on the metadata 114 . That is, the analysis tool 117 may correlate time and location information in image metadata 114 to attributes in the knowledge graph 118 . For example, assume that a user captures an image 108 of a baseball game. Metadata 114 for that image 108 may include a GPS, a date, and a time when the image 108 was captured. The analysis tool 117 can correlate this information to attributes such as weather conditions at that time and location (e.g., “sunny”), an event name (e.g., “Dodgers Game”), teams playing at that game (e.g., “Dodgers” and “Cardinals”), etc.
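The spatiotemporal correlation described above (matching an image to a knowledge-graph event within a time window and a GPS range) can be sketched as below. The 500 m radius, the coordinates, and the simplified integer-hour timestamps are assumptions for illustration.

```python
import math

# Hypothetical sketch: match image metadata to an event by checking the
# event's time window and great-circle distance to the event's location.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a))

def match_event(meta, events, radius_m=500):
    """Return the first event whose time window and location both fit."""
    for ev in events:
        if (ev["start"] <= meta["time"] <= ev["end"]
                and haversine_m(meta["lat"], meta["lon"],
                                ev["lat"], ev["lon"]) <= radius_m):
            return ev["name"]
    return None

events = [{"name": "Dodgers Game", "start": 19, "end": 22,
           "lat": 34.0739, "lon": -118.2400}]
meta = {"time": 20, "lat": 34.0741, "lon": -118.2398}
print(match_event(meta, events))  # Dodgers Game
```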
  • the analysis tool 117 associates the imputed attributes with the user who took the image. As noted, a row in the user-attribute matrix may be updated to reflect the imputed attributes of each new image taken by that user. Further, the analysis tool 117 may perform machine learning techniques, such as latent Dirichlet allocation (LDA), to decompose the user-attribute matrix into sub-matrices. Doing so allows the analysis tool 117 to identify concepts, i.e., clusters of attributes.
  • the product feed extractor 120 may use the user interest taxonomy 119 to identify commercial products and services of a third party (e.g., a retailer, airlines company, health and fitness organization, etc.) that may be of interest to a user.
  • the product feed extractor 120 may retrieve information from a product feed 127 of a third party system 125 .
  • the product feed 127 is a listing of commercial products or services of a third party, such as those of a retailer.
  • a product feed 127 of a shoe retailer may list items such as dress shoes, casual shoes, sports shoes, etc.
  • each item may contain various information about the item, such as a name of the item, type of the item, price of the item, size information for the item, description of the item, and the like.
  • the product feed extractor 120 may identify, from the product feed 127 , one or more attributes describing each product.
  • a product of a shoe retailer may have attributes such as “shoe,” “running,” “menswear,” and so on.
  • the product feed extractor 120 can map the attributes of the product feed 127 with concepts in the interest taxonomy 119 . Doing so allows the analysis tool 117 to identify products and services from the feed 127 that align with certain user interests identified in the interest taxonomy. As a result, third parties can target users who may be interested in the identified products and services.
  • FIG. 2 illustrates mobile application 106 , according to one embodiment.
  • mobile application 106 includes a SDK component 200 used to send image and metadata information to the multimedia service platform.
  • the SDK component 200 further includes an extraction component 205 , a search and similarity component 210 , and a log component 215 .
  • the extraction component 205 extracts metadata (e.g., EXIF metadata, PHAsset metadata, and the like) from images captured using a mobile device 105 .
  • the extraction component 205 may perform ETL preprocessing operations on the metadata.
  • the extraction component 205 may format the metadata for the search and similarity component 210 and the log component 215 .
  • the search and similarity component 210 infers additional metadata from an image based on the metadata (e.g., spatiotemporal metadata) retrieved by the extraction component 205 .
  • additional metadata include whether a given image was captured at daytime or nighttime, whether the image was captured indoors or outdoors, whether the image was edited, weather conditions when the image was captured, etc.
  • the search and similarity component 210 generates a two-dimensional image feature map from a collection of images captured on a given mobile device 105 , where each row represents an image and columns represent metadata attributes. Cells of the map indicate whether an image has a particular attribute.
  • the image feature map allows the search and similarity component 210 to provide analytics and search features for the collection of images captured by a mobile device.
  • a user of the mobile application 106 may search for images on their mobile device which have a given attribute, such as images taken during daytime or taken from a particular location.
  • the search and similarity component 210 may evaluate the image map to identify photos having such an attribute.
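The image feature map and attribute search described above can be sketched as below. The column names and images are invented; the map is the two-dimensional structure described earlier, with presence bits per image.

```python
# Hypothetical sketch of the feature map: rows are images, columns are
# inferred metadata attributes, cells are presence bits.

columns = ["daytime", "outdoor", "selfie"]
feature_map = {
    "img_001": [1, 1, 0],
    "img_002": [1, 0, 1],
    "img_003": [0, 0, 1],
}

def search(*wanted):
    """Return images whose row has a 1 in every requested column."""
    idx = [columns.index(w) for w in wanted]
    return sorted(img for img, row in feature_map.items()
                  if all(row[i] for i in idx))

print(search("daytime", "selfie"))  # ['img_002']
```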
  • the log component 215 evaluates the image metadata. For example, the log component 215 records metadata sent to the ETL server 110. Once received, the ETL application 112 performs ETL operations, e.g., loading the metadata into a data store (such as a database). The metadata is accessible by the analysis tool 117.
  • FIG. 3 further illustrates the analysis tool 117 , according to one embodiment.
  • the analysis tool 117 includes an aggregation component 305 , a knowledge graph component 310 , a taxonomy component 320 , and a user interest inference component 325 .
  • the aggregation component 305 receives streams of image metadata, corresponding to images captured by users of application 106, from the ETL server 110. Once received, the aggregation component 305 organizes the images and metadata by user.
  • the metadata may include both raw image metadata (e.g., time and GPS information) and inferred metadata (e.g., daytime or nighttime image, indoor or outdoor image, “selfie” image, etc.).
  • the aggregation component 305 evaluates log data from the ETL server 110 to identify image metadata from different devices (and presumably different users) and metadata type (e.g., whether the metadata corresponds to image metadata or application usage data).
  • the knowledge graph component 310 builds (and later maintains) the knowledge graph 118 using any suitable data source, such as local news and media websites, online event schedules for performance venues, calendars published by schools, government, or private enterprises, and online schedules and ticket sales.
  • the knowledge graph component 310 determines a set of attributes related to each event to store in the knowledge graph 118 .
  • the knowledge graph component 310 evaluates time and location metadata of the image against the knowledge graph 118 .
  • the knowledge graph component 310 determines whether the image metadata matches a location and/or event in the knowledge graph.
  • the information may be matched using a specified spatiotemporal range, e.g., within a time period of the event, within a given GPS coordinate range, etc.
  • the knowledge graph component 310 may further match the information based on a similarity of metadata of other user photos that have been matched to that event.
  • the taxonomy component 320 evaluates the user-attribute matrix to determine concepts associated with a given user. As stated, a concept is a cluster of related attributes.
  • the taxonomy component 320 may perform machine learning techniques, such as Latent Dirichlet Allocation (LDA), Non-Negative Matrix Factorization (NNMF), deep learning algorithms, and the like, to decompose the user-attribute matrix into sub-matrices.
  • the taxonomy component 320 evaluates the sub-matrices to identify latent concepts from co-occurring attributes.
  • the taxonomy component 320 may determine a score distribution for each attribute over each concept.
  • the taxonomy component 320 may populate a concept-attribute matrix, where the concepts are rows and attributes are columns. Each cell value is the membership score of the respective attribute to the respective concept.
  • the taxonomy component 320 may perform further machine learning techniques (e.g., LDA, NNMF, Deep Learning algorithms, etc.) to identify relationships and hierarchies among the concepts.
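One concrete way to realize the decomposition described above is Non-Negative Matrix Factorization via Lee-Seung multiplicative updates, one of the techniques the disclosure names. The sketch below is an assumed, dependency-light implementation; the `nnmf` name and its parameters are illustrative, not taken from the application:

```python
import numpy as np

def nnmf(V, n_concepts, n_iter=500, eps=1e-9, seed=0):
    """Factor a non-negative user-attribute matrix V (users x attributes)
    into W (users x concepts) and H (concepts x attributes) using
    Lee-Seung multiplicative updates. Each row of H is a concept's score
    distribution over attributes -- i.e., the concept-attribute matrix."""
    rng = np.random.default_rng(seed)
    n_users, n_attrs = V.shape
    W = rng.random((n_users, n_concepts)) + eps
    H = rng.random((n_concepts, n_attrs)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update concept-attribute loadings
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update user-concept loadings
    return W, H
```

On a toy matrix with two disjoint attribute clusters, the two rows of H recover the clusters: co-occurring attributes load on the same concept.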
  • the interest inference component 325 builds a learning model based on the identified concepts and the users. To do so, the interest inference component 325 may train Support Vector Machine (SVM) classifiers for each concept to determine user association in one or more concepts. Doing so results in each user in the platform being assigned an interest score per concept.
  • the interest inference component 325 may predict user interests using the learning model. As the multimedia service platform receives image metadata from new users, the interest inference component 325 can assign the new users with scores for each concept based on the metadata and the learning model. A user having a high membership score in a given concept may indicate a high degree of interest for that concept.
  • the interest inference component 325 may build a user-concept matrix, where rows represent users and columns represent concepts. A cell in the matrix represents a score for a given user-concept combination.
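The one-classifier-per-concept idea above can be sketched as follows. To keep the example self-contained, a simple perceptron stands in for the SVM classifier the disclosure names; the function names, learning rate, and epoch count are assumptions for this sketch:

```python
import numpy as np

def train_one_vs_all(X, labels, n_concepts, epochs=20, lr=0.1):
    """Train one linear classifier per concept (one-vs-all) on user
    attribute vectors X. Users labeled with concept c are positives for
    classifier c; everyone else is a negative example. A perceptron is
    used here as a dependency-free stand-in for an SVM."""
    n, d = X.shape
    W = np.zeros((n_concepts, d))
    b = np.zeros(n_concepts)
    for c in range(n_concepts):
        y = np.where(labels == c, 1.0, -1.0)
        for _ in range(epochs):
            for i in range(n):
                if y[i] * (X[i] @ W[c] + b[c]) <= 0:  # misclassified: update
                    W[c] += lr * y[i] * X[i]
                    b[c] += lr * y[i]
    return W, b

def concept_scores(X, W, b):
    """User-concept score matrix: rows are users, columns are concepts."""
    return X @ W.T + b
```

Each user then receives an interest score per concept, and new users can be scored with the same trained weights.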
  • FIG. 4 further illustrates the product feed extractor 120 , according to one embodiment.
  • the product feed extractor 120 includes a retrieval component 405, a transformation component 410, a mapping component 415, and an identification component 420.
  • the retrieval component 405 extracts a product feed from a system of a third party organization, such as a website of a retailer, fitness organization, or travel company.
  • a product feed is a product inventory provided on a website of a sports clothing retailer.
  • a product feed is a listing of commercial products or services provided by the organization.
  • the product feed of a sports clothing retailer includes items such as running shoes, basketball shorts, baseball caps, etc.
  • each item in the product feed may include information associated with the item, such as a name of the item, a price of the item, an average rating of the item by consumers, a type of the item, a description of the item, and the like.
  • the transformation component 410 determines one or more attributes of each item to associate with the item.
  • attributes of a given item may include “shoes,” “black,” “running,” “menswear,” and so on.
  • the transformation component 410 may perform NLP techniques such as tokenization, lexical analysis, semantic analysis, and pattern matching, to identify attributes.
  • the transformation component 410 builds an item-attribute matrix, where rows represent evaluated items and columns represent attributes. If a given item is associated with a given attribute, the transformation component 410 flags the corresponding cell value as 1 (and 0 if the attribute is not present).
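The item-attribute construction just described can be illustrated as follows. The regex tokenizer is a minimal stand-in for the fuller NLP pipeline (lexical analysis, semantic analysis, pattern matching) the disclosure describes, and `build_item_attribute_matrix` is a name invented for this sketch:

```python
import re

def tokenize(text):
    """Lowercase word tokenization -- a minimal stand-in for the fuller
    NLP pipeline described above."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_item_attribute_matrix(items):
    """items: mapping of item name -> raw description text.
    Returns (sorted attribute vocabulary, binary item-attribute rows),
    where each cell is 1 if the item has the attribute and 0 otherwise."""
    item_tokens = {name: set(tokenize(desc)) for name, desc in items.items()}
    vocab = sorted(set().union(*item_tokens.values()))
    matrix = {name: [1 if attr in toks else 0 for attr in vocab]
              for name, toks in item_tokens.items()}
    return vocab, matrix
```

For a two-item feed, the matrix flags "running" for the shoe but not for the cap, mirroring the 1/0 cell convention above.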
  • the mapping component 415 associates item attributes to concepts of the interest taxonomy.
  • the mapping component 415 may perform NLP and Machine Learning techniques to determine word space model distance of a given item attribute from a concept.
  • the mapping component 415 can determine a score based on such distances.
  • the mapping component 415 may associate an attribute having a score that exceeds a given threshold for a given concept with that concept.
  • the mapping component 415 may build an item-concept matrix, where rows represent items and columns represent concepts. Cells represent a concept score for a given item-concept combination.
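A hedged sketch of the word-space mapping: each attribute is associated with every concept whose embedding lies within a cosine-similarity threshold. The two-dimensional vectors and the 0.7 threshold are toy assumptions; a real system would use word-space vectors learned from text, as the disclosure's NLP and machine learning techniques imply:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word-space vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def map_attributes_to_concepts(attr_vecs, concept_vecs, threshold=0.7):
    """Associate each attribute with every concept whose vector is within
    the cosine-similarity threshold; distance in the word space model
    plays the role of the score described above."""
    mapping = {}
    for attr, av in attr_vecs.items():
        mapping[attr] = [c for c, cv in concept_vecs.items()
                         if cosine(av, cv) >= threshold]
    return mapping
```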
  • the identification component 420 determines one or more products that may be of interest to a given user. To do so, the identification component 420 may evaluate a dot product between a user vector in the user-concept matrix and the product-concept matrix. The identification component 420 may determine that a product exceeding a threshold score for a given user is one the user may have interest in. A third party may use such information to target specific recommendations for that product to the user.
  • FIG. 5 illustrates a method 500 for building an interest taxonomy across a userbase, according to one embodiment.
  • Method 500 begins at step 505, where the aggregation component 305 segments images by users. Doing so allows the analysis tool 117 to evaluate collections of image metadata for each user individually.
  • the knowledge graph component 310 imputes attributes from the knowledge graph 118 onto the images based on the image metadata. To do so, the graph component 310 correlates time and location metadata of a given image to information provided in the knowledge graph, such as events, that coincide with the time and location metadata (with a degree of allowance). As a result, each image is associated with a set of attributes.
  • the knowledge graph component 310 builds a user-attribute matrix based on the imputed attributes to the images.
  • the knowledge graph component 310 further imputes attributes associated with each image to the respective user.
  • Each cell in the user-attribute matrix is an incremental value that represents a count of images in which the corresponding attribute is present.
  • the interest taxonomy generation component 320 decomposes the user-attribute matrix to identify concepts from the attributes.
  • a concept may include one or more attributes.
  • the interest taxonomy generation component 320 may evaluate the attributes using machine learning techniques to identify the concepts. Further, the interest taxonomy generation component 320 may generate an attribute-concept matrix, where the cell values represent membership scores of each attribute to a given concept. Attributes having a qualifying score may be associated with the concept.
  • FIG. 6 illustrates a method 600 for inferring user interests from concepts derived based on image metadata, according to one embodiment.
  • Method 600 begins at step 605, where the analysis tool 117 determines, for each user, an interest score for each concept relative to other concepts. To do so, the analysis tool 117 may calculate a dot product of a user vector in the user-attribute matrix and the attribute-concept matrix.
  • the analysis tool 117 assigns each user to one or more concepts based on the interest scores. To do so, the analysis tool 117 may determine whether a given interest score exceeds a threshold for that concept. And if so, the analysis tool 117 associates the user with that concept.
  • the analysis tool 117 trains multiple one-versus-all predictive models for inferring user interests.
  • the analysis tool 117 may use associations between a user and a concept as positive examples for association to that concept.
  • the analysis tool 117 may also use lack of associations between a user and a concept as negative examples.
  • FIG. 7 illustrates a method 700 for recommending products based on inferred interests derived from image metadata, according to one embodiment.
  • the retrieval component 405 extracts a product feed of a third party system.
  • the product feed includes items such as running shoes, basketball shorts, baseball caps, etc.
  • each item in the product feed includes information associated with the item (e.g., a name of the item, a price of the item, an average rating of the item by consumers, a type of the item, a description of the item, etc.).
  • the transformation component 410 determines a set of attributes for each item.
  • the transformation component 410 performs NLP techniques over the raw text associated with a given item, such as tokenization, lexical and semantic analysis, pattern matching, and so on. Doing so results in a set of attributes for each item (e.g., “outerwear,” “shoes,” “Mercury 7,” “menswear,” “running,” etc.). Further, the transformation component 410 builds an item-attribute matrix, where rows represent evaluated items and columns represent attributes. As stated, if a given item is associated with a given attribute, the transformation component 410 flags the corresponding cell value as 1 (and 0 if the attribute is not present).
  • the mapping component 415 associates product feed attributes with learned concepts of the interest taxonomy. To do so, the mapping component 415 determines a word space model distance of a given item attribute from a concept. Further, the mapping component 415 determines a score based on such distances. The mapping component 415 associates an attribute having a score that exceeds a given threshold for a given concept with that concept. The mapping component 415 populates an item-concept matrix, where rows represent items and columns represent concepts. Cells represent a concept score for a given item-concept combination.
  • the identification component 420 determines which users to target for a given product based on the associations. To do so, the identification component 420 evaluates a dot product of a user vector of the user-concept matrix and the product-concept matrix. The identification component 420 may determine that a product exceeding a threshold score for a given user is one the user may have interest in. As a result, a third party may use such information to target specific recommendations for that product to the user.
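The dot-product targeting step that closes the method can be sketched directly. The 0.5 threshold and the `target_users` name are assumptions for illustration, not values from the disclosure:

```python
import numpy as np

def target_users(user_concept, item_concept, threshold=0.5):
    """Score every user-item pair as the dot product of the user's
    concept vector and the item's concept vector, then keep pairs above
    the threshold as recommendation candidates."""
    scores = user_concept @ item_concept.T  # users x items
    pairs = [(u, i) for u in range(scores.shape[0])
             for i in range(scores.shape[1])
             if scores[u, i] > threshold]
    return scores, pairs
```

With one user interested in each of two concepts and one item per concept, each user is paired only with the matching item.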
  • FIG. 8 illustrates an application server computing system 800 , according to one embodiment.
  • the computing system 800 includes, without limitation, a central processing unit (CPU) 805 , a network interface 815 , a memory 820 , and storage 830 , each connected to a bus 817 .
  • the computing system 800 may also include an I/O device interface 810 connecting I/O devices 812 (e.g., keyboard, mouse, and display devices) to the computing system 800 .
  • the computing elements shown in computing system 800 may correspond to a physical computing system (e.g., a system in a data center) or may be a virtual computing instance executing within a computing cloud.
  • the CPU 805 retrieves and executes programming instructions stored in the memory 820 as well as stores and retrieves application data residing in the memory 820 .
  • the interconnect 817 is used to transmit programming instructions and application data between the CPU 805 , I/O devices interface 810 , storage 830 , network interface 815 , and memory 820 .
  • CPU 805 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like.
  • the memory 820 is generally included to be representative of a random access memory.
  • the storage 830 may be a disk drive storage device. Although shown as a single unit, the storage 830 may be a combination of fixed and/or removable storage devices, such as fixed disk drives, removable memory cards, optical storage, network-attached storage (NAS), or a storage area network (SAN).
  • the memory 820 includes an application service 822 , an analysis tool 824 , and a product feed extractor 826 .
  • the storage 830 includes a knowledge graph 834 , and a user interest taxonomy 836 .
  • the application service 822 provides access to various services of a multimedia service platform to mobile devices.
  • the analysis tool 824 generates a user interest taxonomy 836 based on metadata of images taken by users.
  • the analysis tool 824 builds the knowledge graph 834 from external data sources. To do so, the analysis tool 824 performs NLP techniques on the raw text obtained from the data sources to identify relevant terms related to events, moments, weather, etc. Further, the analysis tool 824 may impute information from the knowledge graph 834 to images submitted to the multimedia service platform. In addition, the analysis tool 824 generates a user interest taxonomy 836 of concepts inferred from the attributes. To do so, the analysis tool 824 may perform machine learning techniques to identify concepts based on co-occurring attributes. In addition, the analysis tool 824 may determine a membership score for each attribute to each identified concept. The analysis tool 824 may associate attributes to a given concept based on the membership score. Further, the analysis tool 824 may identify hierarchical relationships between the concepts through machine learning.
  • the product feed extractor 826 identifies commercial products and services of a third party that may be of interest to a user, based on the user interest taxonomy 836 .
  • the product feed extractor 826 may retrieve information from a product feed of a third party system (e.g., of a retailer).
  • the product feed extractor 826 may identify, from the product feed 127, one or more attributes describing each product.
  • the product feed extractor 826 can map the attributes of the product feed to concepts in the interest taxonomy 836. Doing so allows the analysis tool 824 to identify products and services from the feed that align with certain user interests identified in the interest taxonomy. As a result, third parties can target users who may be interested in the identified products and services.

Abstract

Techniques disclosed herein describe identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files. A product feed extractor extracts a product feed. The product feed lists one or more items. The product feed extractor identifies, for each item in the product feed, one or more attributes describing the item. Each item is mapped to concepts of an interest taxonomy based on the identified one or more attributes for the item. One or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files. Each item is associated to one or more of the users based on the mapping.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to U.S. Provisional Application No. 62/093,372, filed Dec. 17, 2014. The content of the aforementioned application is incorporated by reference in its entirety.
  • BACKGROUND
  • 1. Field
  • Embodiments of the present disclosure generally relate to data analytics. More specifically, to recommending products based on a user profile derived from metadata of digital multimedia (e.g., images, videos, etc.).
  • 2. Description of the Related Art
  • Individuals take images and videos to capture personal experiences and events. The images and videos can represent mementos of various times and places experienced in an individual's life.
  • In addition, mobile devices (e.g., smart phones, tablets, etc.) allow individuals to easily capture digital multimedia. For instance, cameras in mobile devices have steadily improved in quality and can capture high-resolution images. Further, mobile devices now commonly have a storage capacity that can store thousands of images. And because individuals can easily carry smart phones around with them, they can take a greater number of images in many places.
  • All of this has resulted in an explosion of images, and metadata describing images, as virtually anyone can capture and share digital images via text message, image services, social media, and the like. This volume of digital images, now readily available, provides a variety of information valuable to third parties, such as advertisers, marketers, and the like.
  • SUMMARY
  • One embodiment presented herein describes a method for identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files. The method generally includes extracting a product feed. The product feed lists one or more items. The method also includes identifying, for each item in the product feed, one or more attributes describing the item. Each item is mapped to concepts of an interest taxonomy based on the identified one or more attributes for the item. One or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files. Each item is associated to one or more of the users based on the mapping.
  • Other embodiments include, without limitation, a computer-readable medium that includes instructions that enable a processing unit to implement one or more aspects of the disclosed methods as well as a system having a processor, memory, and application programs configured to implement one or more aspects of the disclosed methods.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only exemplary embodiments and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
  • FIG. 1 illustrates an example computing environment, according to one embodiment.
  • FIG. 2 further illustrates the mobile application described relative to FIG. 1, according to one embodiment.
  • FIG. 3 further illustrates the analysis tool described relative to FIG. 1, according to one embodiment.
  • FIG. 4 further illustrates the product feed extractor described relative to FIG. 1, according to one embodiment.
  • FIG. 5 illustrates a method for building an interest taxonomy across a userbase, according to one embodiment.
  • FIG. 6 illustrates a method for inferring user interests from concepts derived based on image metadata, according to one embodiment.
  • FIG. 7 illustrates a method for recommending products based on inferred interests derived from image metadata, according to one embodiment.
  • FIG. 8 illustrates an application server computing system, according to one embodiment.
  • To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.
  • DETAILED DESCRIPTION
  • Embodiments presented herein describe techniques for recommending products to users based on user interests inferred from image metadata. Digital images provide a wealth of information valuable to third parties (e.g., advertisers, marketers, and the like). For example, assume an individual takes pictures at a golf course using a mobile device (e.g., a smart phone, tablet, etc.). Further, assume that the pictures are the only indication the individual was at the golf course (e.g., because the individual made only cash purchases and signed no registers). Metadata associated with these pictures can place the individual at the golf course at a specific time. Further, event data could be used to determine what was going on at that time (e.g., a specific tournament). Such information may be useful to third parties, e.g., for targeted advertising and recommendations.
  • However, an advertiser might not be able to identify an effective audience for targeting a given product or service based on such information alone. Even if image metadata places an individual at a golf course at a particular point of time, the advertiser might draw inaccurate inferences about the individual. For example, the advertiser might assume that because the metadata places the individual at a high-end golf course, the individual is interested in high-end golf equipment. The advertiser might then recommend other high-end equipment or other golf courses to that individual. If the individual rarely plays golf or does not usually spend money at high-end locations, such recommendations may lead to low conversion rates for the advertiser. Historically, advertisers have generally been forced to accept low conversion rates, as techniques for identifying individuals likely to be receptive to or interested in a given product or service are often ineffective.
  • Embodiments presented herein describe techniques for recommending products based on user interests inferred from metadata of digital multimedia (e.g., images and videos). In one embodiment, a multimedia service platform provides a mobile application which allows users to upload digital multimedia files and metadata to the platform from a mobile device. Further, the multimedia service platform may identify patterns from metadata extracted from images and videos. The metadata may describe where and when a given multimedia file was taken. Further, in many cases, embodiments presented herein can identify latent relationships between user interests from collections of metadata from multiple users. For example, if many users who take pictures at golf courses also take pictures at an unrelated event (e.g., of a traveling museum exhibit), then the system disclosed herein can discover a relationship between the interests. Thereafter, advertising related to golfing products and services could be targeted to individuals who publish pictures of the traveling museum exhibit, regardless of any other known interest in golf.
  • In one embodiment, the multimedia service platform evaluates metadata corresponding to each image or video submitted to the platform against a knowledge graph. The knowledge graph provides a variety of information about events, places, dates, times, etc. that may be compared with metadata of the image or video. For example, the knowledge graph may include weather data, location data, event data, and online encyclopedia data. For instance, attributes associated with an event may include a name, location, start time, end time, price range, etc. The multimedia service platform correlates spatiotemporal metadata from a digital image or video with a specific event in the knowledge graph. That is, the knowledge graph is used to impute attributes related to events, places, dates, times, etc., to a given digital image or video based on the metadata provided with that image or video.
  • In one embodiment, the analysis tool represents attributes imputed to digital multimedia from a user base in a user-attribute matrix, where each row of the matrix represents a distinct user and each column represents an attribute from the knowledge graph that can be imputed to a digital multimedia file. The analysis tool may add columns to the user-attribute matrix as additional attributes are identified. The cells of a given row indicate how many times a given attribute has been imputed to a digital multimedia file published by a user corresponding to that row. Accordingly, when the analysis tool imputes an attribute to a digital multimedia file (based on the file metadata), a value for that attribute is incremented in the user-attribute matrix. Doing so allows the multimedia service platform to identify useful information about that user. For instance, the analysis tool may identify that a user often attends sporting events, movies, participates in a particular recreational event (e.g., skiing or golf), etc. In addition, the analysis tool may identify information about events that the user attends, such as whether the events are related to a given sports team, whether the events are related to flights from an airport, a range specifying how much the event may cost, etc.
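The increment-on-impute bookkeeping described above can be sketched with a sparse dictionary-of-counters standing in for a dense user-attribute matrix; the `impute` name and the example attributes are invented for this illustration:

```python
from collections import defaultdict

# Sparse stand-in for the user-attribute matrix: rows keyed by user,
# columns keyed by attribute, counts as cell values. New attributes
# (columns) appear implicitly the first time they are imputed.
user_attrs = defaultdict(lambda: defaultdict(int))

def impute(matrix, user_id, imputed_attributes):
    """Increment the user's row once per attribute imputed to a newly
    published digital multimedia file."""
    for attr in imputed_attributes:
        matrix[user_id][attr] += 1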
  • In one embodiment, the multimedia service platform may learn concepts. A concept is a collection of one or more identified attributes. The multimedia service platform may perform machine learning techniques to learn concepts from the attributes of the user-attribute matrix. For example, the multimedia service platform may compute a score for each attribute relative to each concept. The multimedia service platform may associate attributes that satisfy specified criteria (e.g., the top five scores per concept, attributes exceeding a specified threshold, etc.) with a given concept.
  • Further, the analysis tool may generate an interest taxonomy based on the user-attribute matrix. In one embodiment, an interest taxonomy is a hierarchical representation of user interests based on the concepts. For example, the interest taxonomy can identify general groups (e.g., sports, music, and travel) and sub-groups (e.g., basketball, rock music, and discount airlines) of interest identified from the concepts.
  • The multimedia service platform may use the interest taxonomy to discover latent relationships between concepts. For example, the multimedia service platform may build a predictive learning model using the interest taxonomy. The multimedia service platform could train the predictive learning model using existing user-to-concept associations. Doing so would allow the multimedia service platform to use the model to predict associations between users and other concepts that those users are not currently associated with.
  • Further, the multimedia service platform may map distinct product and service feeds of third parties (e.g., retailers, travel services, venues, etc.) to the user interest taxonomy to identify products and services to recommend to a given user. Generally, a product feed is a listing of items that are provided commercially. For example, a product feed of a clothing retailer may list items such as shirts, pants, shoes, and accessories. Further, each item may contain various information about the item, such as a name of the item, type of the item, price of the item, size information for the item, description of the item, and the like. The product feed may be hosted on a website of the third party or be provided by the third party to the multimedia service platform.
  • In one embodiment, a product feed extractor of multimedia service platform retrieves a product feed from a third party system, such as from a web server of a retailer. The product feed extractor evaluates each item in the product feed to identify item attributes. The product feed extractor may build an item-attribute matrix, where rows represent items and columns represent attributes. Each cell includes a bit representing whether a given item has a given attribute. The product feed extractor determines a mapping for each product to a concept, if available, based on the item-attribute matrix. The product feed extractor may then identify users that may be interested in a given item based on whether a user is associated with a corresponding concept.
  • Note, the following description relies on digital images captured by a user and metadata as a reference example of determining product recommendations based on a user profile derived from image metadata. However, one of skill in the art will recognize that the embodiments presented herein may be adapted to other digital multimedia that include time and location metadata, such as digital videos captured on a mobile device. Further, an analysis tool may be able to extract additional metadata features from such videos, such as the length of the video, which can be used relative to the techniques described herein.
  • FIG. 1 illustrates an example computing environment 100, according to one embodiment. As shown, the computing environment 100 includes one or more mobile devices 105, an extract, transform, and load (ETL) server 110, an application server 115, and one or more third party systems 125, connected to a network 130 (e.g., the Internet).
  • In one embodiment, the mobile devices 105 include a mobile application 106 which allows users to interact with a multimedia service platform (represented by the ETL server 110 and the application server 115). In one embodiment, the mobile application 106 is developed by a third-party organization (e.g., a retailer, social network provider, fitness tracker developer, etc.). The mobile application 106 may send images 108 and associated metadata to the multimedia service platform, e.g., through a software development kit (SDK) provided by the multimedia service platform.
  • In another embodiment, the mobile application 106 may access a social media service (application service 116) provided by the multimedia service platform. The social media service allows users to capture, share, and comment on images 108 as a part of (or in conjunction with) existing social networks. For example, a user can link a social network account to the multimedia service platform through application 106. Thereafter, the user may capture a number of images and submit the images 108 to the social network. In turn, the application 106 retrieves the metadata from the submitted images. Further, the mobile application 106 can send images 108 and metadata to the multimedia service platform. The multimedia service platform uses the metadata to infer latent interests of the user.
  • In any case, the mobile application 106 extracts Exchangeable Image File Format (EXIF) metadata from each image 108. The mobile application 106 can also extract other metadata (e.g., PHAsset metadata in Apple iOS devices) describing additional information, such as GPS data. In addition, the mobile application 106 may perform extract, transform, and load (ETL) operations on the metadata to format the metadata for use by components of the multimedia service platform. For example, the mobile application 106 may determine additional information based on the metadata, such as whether a given image was taken during daytime or nighttime, whether the image was taken indoors or outdoors, whether the image is a “selfie,” etc. Further, the mobile application 106 also retrieves metadata describing application use. Such metadata includes activity by the user on the mobile application 106, such as image views, tagging, etc. Further, as described below, the mobile application 106 provides functionality that allows a user to search through a collection of images by the additional metadata, e.g., searching a collection of images that are “selfies” and taken in the morning.
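Two small pieces of the EXIF handling described above can be sketched: converting GPS coordinates stored as degree/minute/second values into signed decimal degrees, and a crude daytime/nighttime flag from the capture hour. The fixed sunrise/sunset hours are an assumption for illustration; a real inference would account for the photo's date and location:

```python
def dms_to_decimal(dms, ref):
    """Convert EXIF-style GPS (degrees, minutes, seconds) plus the
    N/S/E/W hemisphere reference into signed decimal degrees."""
    degrees, minutes, seconds = dms
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def is_daytime(hour, sunrise=6, sunset=18):
    """Crude day/night flag from the local capture hour; the 06:00-18:00
    window is an assumed placeholder, not the application's actual rule."""
    return sunrise <= hour < sunset
```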
  • In one embodiment, the ETL server 110 includes an ETL application 112. The ETL application 112 receives streams of image metadata 114 (e.g., the EXIF metadata, PHAsset metadata, and additional metadata) from mobile devices 105. Further, the ETL application 112 cleans, stores, and indexes the image metadata 114 for use by the application server 115. Once processed, the ETL application 112 may store the image metadata 114 in a data store (e.g., such as in a database) for access by the application server 115. In one embodiment, the ETL server 110 may be a physical computing system or a virtual machine computing instance in the cloud. Although depicted as a single server, the ETL server 110 may comprise multiple servers configured as a cluster (e.g., via the Apache Spark framework on top of Hadoop-based storage). This architecture allows the ETL server 110 to process large amounts of images and image metadata sent from mobile applications 106.
  • In one embodiment, an application service 116 communicates with the mobile application 106. In one embodiment, the application server 115 may be a physical computing system or a virtual machine computing instance in the cloud. Although depicted as a single server, the application server 115 may comprise multiple servers configured as a cluster (e.g., via the Apache Spark framework on top of a Hadoop-based storage architecture). This architecture allows the application servers 115 to process large amounts of images and image metadata sent from mobile applications 106.
  • As shown, the application server 115 includes an analysis tool 117, a knowledge graph 118, and a user interest taxonomy 119. In one embodiment, the analysis tool 117 generates the user interest taxonomy 119 based on image metadata 114 from image collections of multiple users. As described below, the user interest taxonomy 119 represents interests inferred from image attributes identified from the knowledge graph 118.
  • In one embodiment, the knowledge graph 118 includes a collection of attributes which may be imputed to an image. Examples of attributes include time and location information, event information, genres, price ranges, weather, subject matter, and the like. The analysis tool 117 builds the knowledge graph 118 using weather data, location data, events data, encyclopedia data, and the like from a variety of data sources.
  • In one embodiment, the analysis tool 117 imputes attributes from the knowledge graph 118 to the images 108 based on the metadata 114. That is, the analysis tool 117 may correlate time and location information in image metadata 114 to attributes in the knowledge graph 118. For example, assume that a user captures an image 108 of a baseball game. Metadata 114 for that image 108 may include GPS coordinates, a date, and a time when the image 108 was captured. The analysis tool 117 can correlate this information to attributes such as weather conditions at that time and location (e.g., “sunny”), an event name (e.g., “Dodgers Game”), teams playing at that game (e.g., “Dodgers” and “Cardinals”), etc. The analysis tool 117 associates the imputed attributes with the user who took the image. For example, a row in a user-attribute matrix may be updated to reflect the imputed attributes of each new image taken by that user. Further, the analysis tool 117 may perform machine learning techniques, such as latent Dirichlet allocation (LDA), to decompose the user-attribute matrix into sub-matrices. Doing so allows the analysis tool 117 to identify concepts, i.e., clusters of attributes.
  • As described further below, the product feed extractor 120 may use the user interest taxonomy 119 to identify commercial products and services of a third party (e.g., a retailer, airlines company, health and fitness organization, etc.) that may be of interest to a user.
  • For example, the product feed extractor 120 may retrieve information from a product feed 127 of a third party system 125. In one embodiment, the product feed 127 is a listing of commercial products or services of a third party, such as those of a retailer. For example, a product feed 127 of a shoe retailer may list items such as dress shoes, casual shoes, sports shoes, etc. Further, each item may contain various information about the item, such as a name of the item, type of the item, price of the item, size information for the item, description of the item, and the like. The product feed extractor 120 may identify, from the product feed 127, one or more attributes describing each product. For example, a product of a shoe retailer may have attributes such as “shoe,” “running,” “menswear,” and so on. The product feed extractor 120 can map the attributes of the product feed 127 with concepts in the interest taxonomy 119. Doing so allows the analysis tool 117 to identify products and services from the feed 127 that align with certain user interests identified in the interest taxonomy. As a result, third parties can target users who may be interested in the identified products and services.
  • FIG. 2 illustrates mobile application 106, according to one embodiment. As shown, mobile application 106 includes an SDK component 200 used to send image and metadata information to the multimedia service platform. The SDK component 200 further includes an extraction component 205, a search and similarity component 210, and a log component 215. In one embodiment, the extraction component 205 extracts metadata (e.g., EXIF metadata, PHAsset metadata, and the like) from images captured using a mobile device 105. Further, the extraction component 205 may perform ETL preprocessing operations on the metadata. For example, the extraction component 205 may format the metadata for the search and similarity component 210 and the log component 215.
  • In one embodiment, the search and similarity component 210 infers additional metadata from an image based on the metadata (e.g., spatiotemporal metadata) retrieved by the extraction component 205. Examples of additional metadata include whether a given image was captured at daytime or nighttime, whether the image was captured indoors or outdoors, whether the image was edited, weather conditions when the image was captured, etc. Further, the search and similarity component 210 generates a two-dimensional image feature map from a collection of images captured on a given mobile device 105, where each row represents an image and columns represent metadata attributes. Cells of the map indicate whether an image has a particular attribute. The image feature map allows the search and similarity component 210 to provide analytics and search features for the collection of images captured by a mobile device. For example, a user of the mobile application 106 may search for images on their mobile device which have a given attribute, such as images taken during daytime or taken from a particular location. In turn, the search and similarity component 210 may evaluate the image map to identify photos having such an attribute.
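The two-dimensional image feature map described above can be sketched as a binary matrix with a simple attribute query. This is an illustrative sketch; the attribute names and data are hypothetical.

```python
import numpy as np

ATTRIBUTES = ["daytime", "outdoor", "selfie", "edited"]  # hypothetical columns

# Rows are images, columns are metadata attributes; a 1 means the
# image has that attribute (the map described in the text).
feature_map = np.array([
    [1, 1, 0, 0],   # image 0: daytime, outdoor
    [1, 0, 1, 0],   # image 1: daytime selfie
    [0, 0, 1, 1],   # image 2: edited nighttime selfie
])

def search(feature_map, *wanted):
    """Return indices of images having every requested attribute."""
    cols = [ATTRIBUTES.index(a) for a in wanted]
    mask = feature_map[:, cols].all(axis=1)
    return np.flatnonzero(mask).tolist()

print(search(feature_map, "daytime"))            # [0, 1]
print(search(feature_map, "daytime", "selfie"))  # [1]
```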
  • In one embodiment, the log component 215 evaluates the image metadata. For example, the log component 215 records metadata sent to the ETL server 110. Once received, the ETL application 112 performs ETL operations, e.g., loading the metadata into a data store (such as a database). The metadata is accessible by the analysis tool 117.
  • FIG. 3 further illustrates the analysis tool 117, according to one embodiment. As shown, the analysis tool 117 includes an aggregation component 305, a knowledge graph component 310, a taxonomy component 320, and a user interest inference component 325.
  • In one embodiment, the aggregation component 305 receives, from the ETL server 110, streams of image metadata corresponding to images captured by users of the mobile application 106. Once received, the aggregation component 305 organizes images and metadata by user. The metadata may include both raw image metadata (e.g., time and GPS information) and inferred metadata (e.g., daytime or nighttime image, indoor or outdoor image, “selfie” image, etc.). To organize metadata by user, the aggregation component 305 evaluates log data from the ETL server 110 to identify image metadata from different devices (and presumably different users) and metadata type (e.g., whether the metadata corresponds to image metadata or application usage data).
  • In one embodiment, the knowledge graph component 310 builds (and later maintains) the knowledge graph 118 using any suitable data source, such as local news and media websites, online event schedules for performance venues, calendars published by schools, government, or private enterprises, and online schedules and ticket sales. The knowledge graph component 310 determines a set of attributes related to each event to store in the knowledge graph 118.
  • In one embodiment, to impute attributes from the knowledge graph 118 to a given image, the knowledge graph component 310 evaluates time and location metadata of the image against the knowledge graph 118. The knowledge graph component 310 determines whether the image metadata matches a location and/or event in the knowledge graph. The information may be matched using a specified spatiotemporal range, e.g., within a time period of the event, within a set of GPS coordinate range, etc. In one embodiment, the knowledge graph component 310 may further match the information based on a similarity of metadata of other user photos that have been matched to that event.
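The spatiotemporal matching described above can be sketched as a range check over time and GPS distance. The event record, the 1 km radius, and the one-hour slack below are illustrative assumptions, not values from the disclosure.

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def matches_event(photo_time, photo_lat, photo_lon, event,
                  max_km=1.0, slack=timedelta(hours=1)):
    """True if a photo's time/GPS metadata falls within an event's
    spatiotemporal range (the tolerances are illustrative)."""
    in_time = event["start"] - slack <= photo_time <= event["end"] + slack
    in_place = haversine_km(photo_lat, photo_lon, event["lat"], event["lon"]) <= max_km
    return in_time and in_place

# Hypothetical knowledge-graph event record.
game = {"name": "Dodgers Game",
        "start": datetime(2015, 2, 20, 19, 0), "end": datetime(2015, 2, 20, 22, 0),
        "lat": 34.0739, "lon": -118.2400}
print(matches_event(datetime(2015, 2, 20, 20, 15), 34.0735, -118.2398, game))  # True
```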
  • In one embodiment, the taxonomy component 320 evaluates the user-attribute matrix to determine concepts associated with a given user. As stated, a concept is a cluster of related attributes. The taxonomy component 320 may perform machine learning techniques, such as Latent Dirichlet Allocation (LDA), Non-Negative Matrix Factorization (NNMF), Deep Learning algorithms, and the like, to decompose the user-attribute matrix into sub-matrices. The taxonomy component 320 evaluates the sub-matrices to identify latent concepts from co-occurring attributes.
  • Further, the taxonomy component 320 may determine a score distribution for each attribute over each concept. The taxonomy component 320 may populate a concept-attribute matrix, where the concepts are rows and attributes are columns. Each cell value is the membership score of the respective attribute to the respective concept. The taxonomy component 320 may perform further machine learning techniques (e.g., LDA, NNMF, Deep Learning algorithms, etc.) to identify relationships and hierarchies among the concepts.
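The decomposition into sub-matrices can be sketched with a minimal non-negative matrix factorization via multiplicative updates — one of the techniques the text names (the toy data and the choice of two concepts are fabricated for illustration):

```python
import numpy as np

def nmf(X, k, iters=300, seed=0):
    """Factor a nonnegative matrix X (users x attributes) into
    W (users x concepts) and H (concepts x attributes) using
    Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 0.1
    H = rng.random((k, X.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy user-attribute counts with two latent clusters of co-occurring
# attributes (e.g., "baseball" vs. "hiking" attributes).
X = np.array([[5, 4, 0, 0],
              [4, 5, 1, 0],
              [0, 0, 5, 4],
              [1, 0, 4, 5]], dtype=float)
W, H = nmf(X, k=2)
# Rows of H are concept-attribute score distributions; W @ H approximates X.
print(np.round(W @ H, 1))
```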
  • In one embodiment, the interest inference component 325 builds a learning model based on the identified concepts and the users. To do so, the interest inference component 325 may train Support Vector Machine (SVM) classifiers for each concept to determine user association with one or more concepts. Doing so results in each user in the platform being assigned an interest score per concept.
  • Once trained, the interest inference component 325 may predict user interests using the learning model. As the multimedia service platform receives image metadata from new users, the interest inference component 325 can assign the new users with scores for each concept based on the metadata and the learning model. A user having a high membership score in a given concept may indicate a high degree of interest in that concept. The interest inference component 325 may build a user-concept matrix, where rows represent users and columns represent concepts. A cell in the matrix represents a score for a given user-concept combination.
  • FIG. 4 further illustrates the product feed extractor 120, according to one embodiment. As shown, the product feed extractor 120 includes a retrieval component 405, a transformation component 410, a mapping component 415, and an identification component 420.
  • In one embodiment, the retrieval component 405 extracts a product feed from a system of a third party organization, such as a website of a retailer, fitness organization, or travel company. An example of the product feed is a product inventory provided on a website of a sports clothing retailer. As stated, a product feed is a listing of commercial products or services provided by the organization. Continuing the previous example, the product feed of a sports clothing retailer includes items such as running shoes, basketball shorts, baseball caps, etc. Further, each item in the product feed may include information associated with the item, such as a name of the item, a price of the item, an average rating of the item by consumers, a type of the item, a description of the item, and the like.
  • In one embodiment, the transformation component 410 determines one or more attributes of each item to associate with the item. Continuing the previous example of a sports clothing retailer, attributes of a given item may include “shoes,” “black,” “running,” “menswear,” and so on. The transformation component 410 may perform NLP techniques such as tokenization, lexical analysis, semantic analysis, and pattern matching, to identify attributes. In one embodiment, the transformation component 410 builds an item-attribute matrix, where rows represent evaluated items and columns represent attributes. If a given item is associated with a given attribute, the transformation component 410 flags the corresponding cell value as 1 (and 0 if the attribute is not present).
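A heavily simplified version of this attribute extraction — plain tokenization against a known attribute vocabulary, standing in for the fuller NLP pipeline the text describes — might look like this (the vocabulary and items are fabricated):

```python
import re

# Hypothetical attribute vocabulary; a real system would derive attributes
# with lexical/semantic analysis rather than a fixed list.
VOCAB = ["shoes", "black", "running", "menswear", "shorts"]

items = {
    "Road Runner": "Black running shoes for men (menswear)",
    "Court Short": "Lightweight basketball shorts",
}

def attributes_of(description):
    """Tokenize a raw item description and keep known attribute terms."""
    tokens = set(re.findall(r"[a-z]+", description.lower()))
    return [a for a in VOCAB if a in tokens]

# Binary item-attribute matrix: rows are items, columns follow VOCAB order;
# a cell is 1 when the item has the attribute, else 0.
matrix = {name: [1 if a in attributes_of(desc) else 0 for a in VOCAB]
          for name, desc in items.items()}
print(matrix["Road Runner"])  # [1, 1, 1, 1, 0]
```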
  • In one embodiment, the mapping component 415 associates item attributes to concepts of the interest taxonomy. For example, the mapping component 415 may perform NLP and Machine Learning techniques to determine word space model distance of a given item attribute from a concept. The mapping component 415 can determine a score based on such distances. The mapping component 415 may associate an attribute having a score that exceeds a given threshold for a given concept with that concept. The mapping component 415 may build an item-concept matrix, where rows represent items and columns represent concepts. Cells represent a concept score for a given item-concept combination.
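The word-space distance test can be sketched with cosine similarity over word vectors. The vectors and the 0.5 threshold below are fabricated for illustration; a real system would use learned embeddings.

```python
import numpy as np

# Toy 3-d "word space" vectors (fabricated); "fitness" and "cooking"
# play the role of concept vectors from the interest taxonomy.
embedding = {
    "running":  np.array([0.9, 0.1, 0.0]),
    "shoes":    np.array([0.8, 0.3, 0.1]),
    "fitness":  np.array([0.85, 0.2, 0.05]),
    "cooking":  np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity; higher means closer in the word space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def concepts_for(attribute, concepts=("fitness", "cooking"), threshold=0.5):
    """Associate an attribute with each concept whose score passes the threshold."""
    return [c for c in concepts if cosine(embedding[attribute], embedding[c]) > threshold]

print(concepts_for("running"))  # ['fitness']
```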
  • In one embodiment, the identification component 420 determines one or more products that may be of interest to a given user. To do so, the identification component 420 may evaluate a dot product between a user vector in the user-concept matrix and the product-concept matrix. The identification component 420 may determine that a product whose score for that user exceeds a threshold is likely to be of interest to the user. A third party may use such information to target specific recommendations for that product to the user.
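The scoring step above reduces to a few lines of linear algebra. All matrices here are fabricated toy data; the 0.5 threshold is an illustrative assumption.

```python
import numpy as np

concepts = ["fitness", "cooking"]
products = ["running shoes", "chef's knife", "yoga mat"]

# Item-concept matrix: rows = products, columns = concepts.
item_concept = np.array([[0.9, 0.0],
                         [0.0, 0.8],
                         [0.7, 0.1]])

# One user's row of the user-concept matrix (interest score per concept).
user_concept = np.array([0.8, 0.1])

scores = item_concept @ user_concept  # one dot product per product
threshold = 0.5
recommended = [p for p, s in zip(products, scores) if s > threshold]
print(recommended)  # ['running shoes', 'yoga mat']
```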
  • FIG. 5 illustrates a method 500 for building an interest taxonomy across a userbase, according to one embodiment. Method 500 begins at step 505, where the aggregation component 305 segments images by users. Doing so allows the analysis tool 117 to evaluate collections of image metadata for each user individually.
  • At step 510, the knowledge graph component 310 imputes attributes from the knowledge graph 118 onto the images based on the image metadata. To do so, the graph component 310 correlates time and location metadata of a given image to information provided in the knowledge graph, such as events, that coincide with the time and location metadata (with a degree of allowance). As a result, each image is associated with a set of attributes.
  • At step 515, the knowledge graph component 310 builds a user-attribute matrix based on the imputed attributes to the images. The knowledge graph component 310 further imputes attributes associated with each image to the respective user. Each cell in the user-attribute matrix is an incremental value that represents a count of images in which the corresponding attribute is present.
  • At step 520, the taxonomy component 320 decomposes the user-attribute matrix to identify concepts from the attributes. As stated, a concept may include one or more attributes. The taxonomy component 320 may evaluate the attributes using machine learning techniques to identify the concepts. Further, the taxonomy component 320 may generate an attribute-concept matrix, where the cell values represent membership scores of each attribute to a given concept. Attributes having a qualifying score may be associated with the concept.
  • FIG. 6 illustrates a method 600 for inferring user interests from concepts derived based on image metadata, according to one embodiment. Method 600 begins at step 605, where the analysis tool 117 determines, for each user, an interest score for each concept relative to other concepts. To do so, the analysis tool 117 may calculate a dot product between a user vector of the user-attribute matrix and the attribute-concept matrix.
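Step 605 can be sketched by projecting one user's attribute counts through the attribute-concept matrix. The attribute names, counts, and membership scores below are fabricated toy data.

```python
import numpy as np

attributes = ["baseball", "stadium", "trail", "summit"]
concepts = ["sports", "hiking"]

# Attribute-concept membership scores (rows = attributes, columns = concepts).
attr_concept = np.array([[0.9, 0.0],
                         [0.8, 0.1],
                         [0.0, 0.9],
                         [0.1, 0.8]])

# One user's row of the user-attribute matrix: image counts per attribute.
user_attr = np.array([12, 7, 1, 0])

# Dot product of the user vector with each concept column.
interest = user_attr @ attr_concept
print(dict(zip(concepts, interest.round(1).tolist())))  # {'sports': 16.4, 'hiking': 1.6}
```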
  • At step 610, the analysis tool 117 assigns each user to one or more concepts based on the interest scores. To do so, the analysis tool 117 may determine whether a given interest score exceeds a threshold for that concept. And if so, the analysis tool 117 associates the user with that concept.
  • At step 615, the analysis tool 117 trains multiple one-versus-all predictive models for inferring user interests. The analysis tool 117 may use associations between a user and a concept as positive examples for association to that concept. The analysis tool 117 may also use lack of associations between a user and a concept as negative examples.
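The one-versus-all setup at step 615 can be sketched as follows, with a tiny gradient-descent logistic learner standing in for the SVM classifiers named earlier (the feature matrix and concept labels are fabricated):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, epochs=200):
    """Minimal batch gradient-descent logistic regression
    (a stand-in for an SVM classifier)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted membership
        g = p - y                                # gradient of log loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Users-as-rows feature matrix (e.g., normalized attribute counts) and,
# per concept, binary labels: 1 = user associated with the concept
# (positive example), 0 = not (negative example).
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = {"sports": np.array([1, 1, 0, 0]),
          "hiking": np.array([0, 0, 1, 1])}

# One-versus-all: one model per concept.
models = {c: train_logistic(X, y) for c, y in labels.items()}

def score(concept, x):
    """Predicted interest score of a new user vector for a concept."""
    w, b = models[concept]
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

print(round(score("sports", np.array([0.85, 0.15])), 2))
```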
  • FIG. 7 illustrates a method 700 for recommending products based on inferred interests derived from image metadata, according to one embodiment. At step 705, the retrieval component 405 extracts a product feed of a third party system. For example, assume the retrieval component 405 extracts a product feed from a website of a sports clothing retailer. The product feed includes items such as running shoes, basketball shorts, baseball caps, etc. Further, each item in the product feed includes information associated with the item (e.g., a name of the item, a price of the item, an average rating of the item by consumers, a type of the item, a description of the item, etc.).
  • At step 710, the transformation component 410 determines a set of attributes for each item. The transformation component 410 performs NLP techniques over the raw text associated with a given item, such as tokenization, lexical and semantic analysis, pattern matching, and so on. Doing so results in a set of attributes for each item (e.g., “outerwear,” “shoes,” “Mercury 7,” “menswear,” “running,” etc.). Further, the transformation component 410 builds an item-attribute matrix, where rows represent evaluated items and columns represent attributes. As stated, if a given item is associated with a given attribute, the transformation component 410 flags the corresponding cell value as 1 (and 0 if the attribute is not present).
  • At step 715, the mapping component 415 associates product feed attributes with learned concepts of the interest taxonomy. To do so, the mapping component 415 determines a word space model distance of a given item attribute from a concept. Further, the mapping component 415 determines a score based on such distances. The mapping component 415 associates an attribute having a score that exceeds a given threshold for a given concept with that concept. The mapping component 415 populates an item-concept matrix, where rows represent items and columns represent concepts. Cells represent a concept score for a given item-concept combination.
  • At step 720, the identification component 420 determines which users to target for a given product based on the associations. To do so, the identification component 420 evaluates a dot product of a user vector of the user-concept matrix and the product-concept matrix. The identification component 420 may determine that a product whose score for that user exceeds a threshold is likely to be of interest to the user. As a result, a third party may use such information to target specific recommendations for that product to the user.
  • FIG. 8 illustrates an application server computing system 800, according to one embodiment. As shown, the computing system 800 includes, without limitation, a central processing unit (CPU) 805, a network interface 815, a memory 820, and storage 830, each connected to an interconnect (bus) 817. The computing system 800 may also include an I/O device interface 810 connecting I/O devices 812 (e.g., keyboard, mouse, and display devices) to the computing system 800. Further, in context of this disclosure, the computing elements shown in computing system 800 may correspond to a physical computing system (e.g., a system in a data center) or may be a virtual computing instance executing within a computing cloud.
  • The CPU 805 retrieves and executes programming instructions stored in the memory 820 as well as stores and retrieves application data residing in the memory 820. The interconnect 817 is used to transmit programming instructions and application data between the CPU 805, I/O devices interface 810, storage 830, network interface 815, and memory 820. Note, CPU 805 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like. And the memory 820 is generally included to be representative of a random access memory. The storage 830 may be a disk drive storage device. Although shown as a single unit, the storage 830 may be a combination of fixed and/or removable storage devices, such as fixed disc drives, removable memory cards, optical storage, network attached storage (NAS), or a storage area network (SAN).
  • Illustratively, the memory 820 includes an application service 822, an analysis tool 824, and a product feed extractor 826. The storage 830 includes a knowledge graph 834, and a user interest taxonomy 836. The application service 822 provides access to various services of a multimedia service platform to mobile devices. The analysis tool 824 generates a user interest taxonomy 836 based on metadata of images taken by users.
  • Further, the analysis tool 824 builds the knowledge graph 834 from external data sources. To do so, the analysis tool 824 performs NLP techniques on the raw text obtained from the data sources to identify relevant terms related to events, moments, weather, etc. Further, the analysis tool 824 may impute information from the knowledge graph 834 to images submitted to the multimedia service platform. In addition, the analysis tool 824 generates a user interest taxonomy 836 of concepts inferred from the attributes. To do so, the analysis tool 824 may perform machine learning techniques to identify concepts based on co-occurring attributes. In addition, the analysis tool 824 may determine a membership score for each attribute to each identified concept. The analysis tool 824 may associate attributes to a given concept based on the membership score. Further, the analysis tool 824 may identify hierarchical relationships between the concepts through machine learning.
  • Further, the product feed extractor 826 identifies commercial products and services of a third party that may be of interest to a user, based on the user interest taxonomy 836. For example, the product feed extractor 826 may retrieve information from a product feed of a third party system (e.g., of a retailer). The product feed extractor 826 may identify, from the product feed, one or more attributes describing each product. The product feed extractor 826 can map the attributes of the product feed with concepts in the interest taxonomy 836. Doing so allows the analysis tool 824 to identify products and services from the feed that align with certain user interests identified in the interest taxonomy. As a result, third parties can target users who may be interested in the identified products and services.
  • While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (21)

What is claimed is:
1. A method for identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files, the method comprising:
extracting a product feed, wherein the product feed lists one or more items;
identifying, for each item in the product feed, one or more attributes describing the item;
mapping each item to concepts of an interest taxonomy based on the identified one or more attributes for the item, wherein one or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files; and
associating each item to one or more of the users based on the mapping.
2. The method of claim 1, wherein the identified attributes include at least one of a name of the item, a price of the item, a description of the item, and a type of the item.
3. The method of claim 1, further comprising:
for each attribute identified for the item, updating an item-attribute matrix to reflect the attribute identified for the item.
4. The method of claim 3, wherein mapping each item to the concepts of the interest taxonomy comprises:
evaluating the item-attribute matrix to determine at least a first concept to associate with the item.
5. The method of claim 4, further comprising:
updating an item-concept matrix to reflect the attribute identified for the first concept.
6. The method of claim 4, wherein the first concept is determined based on a co-occurrence between one or more attributes in the item-attribute matrix.
7. The method of claim 1, wherein each of the digital multimedia files is one of either an image or a video.
8. A non-transitory computer-readable storage medium storing instructions, which, when executed on a processor, performs an operation for identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files, the operation comprising:
extracting a product feed, wherein the product feed lists one or more items;
identifying, for each item in the product feed, one or more attributes describing the item;
mapping each item to concepts of an interest taxonomy based on the identified one or more attributes for the item, wherein one or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files; and
associating each item to one or more of the users based on the mapping.
9. The non-transitory computer-readable storage medium of claim 8, wherein the identified attributes include at least one of a name of the item, a price of the item, a description of the item, and a type of the item.
10. The non-transitory computer-readable storage medium of claim 8, wherein the operation further comprises:
for each attribute identified for the item, updating an item-attribute matrix to reflect the attribute identified for the item.
11. The non-transitory computer-readable storage medium of claim 10, wherein mapping each item to the concepts of the interest taxonomy comprises:
evaluating the item-attribute matrix to determine at least a first concept to associate with the item.
12. The non-transitory computer-readable storage medium of claim 11, wherein the operation further comprises:
updating an item-concept matrix to reflect the attribute identified for the first concept.
13. The non-transitory computer-readable storage medium of claim 11, wherein the first concept is determined based on a co-occurrence between one or more attributes in the item-attribute matrix.
14. The non-transitory computer-readable storage medium of claim 8, wherein each of the digital multimedia files is one of either an image or a video.
15. A system, comprising:
a processor; and
a memory storing one or more application programs configured to perform an operation for identifying one or more products to recommend to a plurality of users based on metadata of digital multimedia files, the operation comprising:
extracting a product feed, wherein the product feed lists one or more items,
identifying, for each item in the product feed, one or more attributes describing the item,
mapping each item to concepts of an interest taxonomy based on the identified one or more attributes for the item, wherein one or more users are associated with each concept in the interest taxonomy based on the metadata of the digital multimedia files, and
associating each item to one or more of the users based on the mapping.
16. The system of claim 15, wherein the identified attributes include at least one of a name of the item, a price of the item, a description of the item, and a type of the item.
17. The system of claim 15, wherein the operation further comprises:
for each attribute identified for the item, updating an item-attribute matrix to reflect the attribute identified for the item.
18. The system of claim 17, wherein mapping each item to the concepts of the interest taxonomy comprises:
evaluating the item-attribute matrix to determine at least a first concept to associate with the item.
19. The system of claim 18, wherein the operation further comprises:
updating an item-concept matrix to reflect the attribute identified for the first concept.
20. The system of claim 18, wherein the first concept is determined based on a co-occurrence between one or more attributes in the item-attribute matrix.
21. The system of claim 15, wherein each of the digital multimedia files is one of either an image or a video.
US14/627,264 2014-12-17 2015-02-20 Method for recommending products based on a user profile derived from metadata of multimedia content Abandoned US20160180402A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/627,264 US20160180402A1 (en) 2014-12-17 2015-02-20 Method for recommending products based on a user profile derived from metadata of multimedia content

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201462093372P 2014-12-17 2014-12-17
US14/627,264 US20160180402A1 (en) 2014-12-17 2015-02-20 Method for recommending products based on a user profile derived from metadata of multimedia content

Publications (1)

Publication Number Publication Date
US20160180402A1 true US20160180402A1 (en) 2016-06-23

Family

ID=56129838

Family Applications (6)

Application Number Title Priority Date Filing Date
US14/616,197 Abandoned US20160203137A1 (en) 2014-12-17 2015-02-06 Imputing knowledge graph attributes to digital multimedia based on image and video metadata
US14/618,859 Active 2036-02-28 US9805098B2 (en) 2014-12-17 2015-02-10 Method for learning a latent interest taxonomy from multimedia metadata
US14/627,064 Active 2036-03-18 US9798980B2 (en) 2014-12-17 2015-02-20 Method for inferring latent user interests based on image metadata
US14/627,264 Abandoned US20160180402A1 (en) 2014-12-17 2015-02-20 Method for recommending products based on a user profile derived from metadata of multimedia content
US15/792,308 Abandoned US20180082200A1 (en) 2014-12-17 2017-10-24 Method for inferring latent user interests based on image metadata
US15/798,700 Abandoned US20180052855A1 (en) 2014-12-17 2017-10-31 Method for learning a latent interest taxonomy from multimedia metadata

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US14/616,197 Abandoned US20160203137A1 (en) 2014-12-17 2015-02-06 Imputing knowledge graph attributes to digital multimedia based on image and video metadata
US14/618,859 Active 2036-02-28 US9805098B2 (en) 2014-12-17 2015-02-10 Method for learning a latent interest taxonomy from multimedia metadata
US14/627,064 Active 2036-03-18 US9798980B2 (en) 2014-12-17 2015-02-20 Method for inferring latent user interests based on image metadata

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/792,308 Abandoned US20180082200A1 (en) 2014-12-17 2017-10-24 Method for inferring latent user interests based on image metadata
US15/798,700 Abandoned US20180052855A1 (en) 2014-12-17 2017-10-31 Method for learning a latent interest taxonomy from multimedia metadata

Country Status (1)

Country Link
US (6) US20160203137A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180235A1 (en) * 2014-12-17 2016-06-23 InSnap, Inc. Method for inferring latent user interests based on image metadata
CN106597439A (en) * 2016-12-12 2017-04-26 电子科技大学 Synthetic aperture radar target identification method based on incremental learning
US20170132509A1 (en) * 2015-11-06 2017-05-11 Adobe Systems Incorporated Item recommendations via deep collaborative filtering
US20180032882A1 (en) * 2016-07-27 2018-02-01 Fuji Xerox Co., Ltd. Method and system for generating recommendations based on visual data and associated tags
US20190138943A1 (en) * 2017-11-08 2019-05-09 International Business Machines Corporation Cognitive visual conversation
US20190163829A1 (en) * 2017-11-27 2019-05-30 Adobe Inc. Collaborative-Filtered Content Recommendations With Justification in Real-Time
US20190311418A1 (en) * 2018-04-10 2019-10-10 International Business Machines Corporation Trend identification and modification recommendations based on influencer media content analysis
US10891653B1 (en) * 2015-03-13 2021-01-12 A9.Com, Inc. Approaches for retrieval of electronic advertisements
US11281734B2 (en) 2019-07-03 2022-03-22 International Business Machines Corporation Personalized recommender with limited data availability
US11341207B2 (en) 2018-12-10 2022-05-24 Ebay Inc. Generating app or web pages via extracting interest from images
US11373228B2 (en) 2019-01-31 2022-06-28 Walmart Apollo, Llc System and method for determining substitutes for a requested product
US11373231B2 (en) 2019-01-31 2022-06-28 Walmart Apollo, Llc System and method for determining substitutes for a requested product and the order to provide the substitutes
JP2022121602A (en) * 2019-06-28 2022-08-19 Fujifilm Corporation Information processing apparatus, method and program

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10331733B2 (en) * 2013-04-25 2019-06-25 Google Llc System and method for presenting condition-specific geographic imagery
CN106227820B (en) * 2016-07-22 2019-08-06 University of Science and Technology Beijing Construction method for a knowledge base of basic theories of traditional Chinese medicine
US10083162B2 (en) 2016-11-28 2018-09-25 Microsoft Technology Licensing, Llc Constructing a narrative based on a collection of images
CN106934032B (en) * 2017-03-14 2019-10-18 Beijing iSoftStone Smart City Technology Co., Ltd. Urban knowledge graph construction method and device
CN109558018B (en) * 2017-09-27 2022-05-17 Tencent Technology (Shenzhen) Co., Ltd. Content display method and device and storage medium
US10884769B2 (en) * 2018-02-17 2021-01-05 Adobe Inc. Photo-editing application recommendations
US11036811B2 (en) 2018-03-16 2021-06-15 Adobe Inc. Categorical data transformation and clustering for machine learning using data repository systems
WO2019213425A2 (en) * 2018-05-02 2019-11-07 Visa International Service Association System and method including accurate scoring and response
US10897442B2 (en) * 2018-05-18 2021-01-19 International Business Machines Corporation Social media integration for events
US10902573B2 (en) 2018-05-31 2021-01-26 International Business Machines Corporation Cognitive validation of date/time information corresponding to a photo based on weather information
US10339420B1 (en) * 2018-08-30 2019-07-02 Accenture Global Solutions Limited Entity recognition using multiple data streams to supplement missing information associated with an entity
CN109840268A (en) * 2018-12-23 2019-06-04 State Grid Zhejiang Electric Power Co., Ltd. Global data map construction method based on an enterprise information model
US11403328B2 (en) 2019-03-08 2022-08-02 International Business Machines Corporation Linking and processing different knowledge graphs
JP7129383B2 (en) * 2019-07-03 2022-09-01 Fujifilm Corporation Image processing device, image processing method, image processing program, and recording medium storing the program
CN114846501A (en) * 2020-01-22 2022-08-02 Société des Produits Nestlé S.A. Social media influencer platform
CN111931069B (en) * 2020-09-25 2021-01-22 Zhejiang Koubei Network Technology Co., Ltd. User interest determination method and apparatus, and computer device
CN113656589B (en) * 2021-04-19 2023-07-04 Tencent Technology (Shenzhen) Co., Ltd. Object attribute determination method and apparatus, computer device, and storage medium
CN113297392B (en) * 2021-06-02 2022-02-18 Jiangsu Shudui Technology Co., Ltd. Intelligent data service method based on knowledge graph
EP4372582A1 (en) * 2021-08-27 2024-05-22 Siemens Aktiengesellschaft Knowledge graph generation method and apparatus and computer readable medium

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6411724B1 (en) * 1999-07-02 2002-06-25 Koninklijke Philips Electronics N.V. Using meta-descriptors to represent multimedia information
US7275063B2 (en) * 2002-07-16 2007-09-25 Horn Bruce L Computer system for automatic organization, indexing and viewing of information from multiple sources
US7548929B2 (en) * 2005-07-29 2009-06-16 Yahoo! Inc. System and method for determining semantically related terms
US7822746B2 (en) * 2005-11-18 2010-10-26 Qurio Holdings, Inc. System and method for tagging images based on positional information
US8145677B2 (en) * 2007-03-27 2012-03-27 Faleh Jassem Al-Shameri Automated generation of metadata for mining image and text data
US8055708B2 (en) * 2007-06-01 2011-11-08 Microsoft Corporation Multimedia spaces
US8611677B2 (en) * 2008-11-19 2013-12-17 Intellectual Ventures Fund 83 Llc Method for event-based semantic classification
US8458115B2 (en) * 2010-06-08 2013-06-04 Microsoft Corporation Mining topic-related aspects from user generated content
US8832080B2 (en) * 2011-05-25 2014-09-09 Hewlett-Packard Development Company, L.P. System and method for determining dynamic relations from images
JP5830784B2 (en) * 2011-06-23 2015-12-09 サイバーアイ・エンタテインメント株式会社 Interest graph collection system by relevance search with image recognition system
US20130085858A1 (en) * 2011-10-04 2013-04-04 Richard Bill Sim Targeting advertisements based on user interactions
US8869235B2 (en) * 2011-10-11 2014-10-21 Citrix Systems, Inc. Secure mobile browser for protecting enterprise data
US20150242689A1 (en) * 2012-08-06 2015-08-27 See-Out Pty, Ltd System and method for determining graph relationships using images
US9454530B2 (en) * 2012-10-04 2016-09-27 Netflix, Inc. Relationship-based search and recommendations
US9218439B2 (en) * 2013-06-04 2015-12-22 Battelle Memorial Institute Search systems and computer-implemented search methods
US20140372102A1 (en) * 2013-06-18 2014-12-18 Xerox Corporation Combining temporal processing and textual entailment to detect temporally anchored events
US9542422B2 (en) * 2013-08-02 2017-01-10 Shoto, Inc. Discovery and sharing of photos between devices
US20150095303A1 (en) * 2013-09-27 2015-04-02 Futurewei Technologies, Inc. Knowledge Graph Generator Enabled by Diagonal Search
US9864758B2 (en) * 2013-12-12 2018-01-09 Nant Holdings Ip, Llc Image recognition verification
US10289637B2 (en) * 2014-06-13 2019-05-14 Excalibur Ip, Llc Entity generation using queries
US10074013B2 (en) * 2014-07-23 2018-09-11 Gopro, Inc. Scene and activity identification in video summary generation
US10474949B2 (en) * 2014-08-19 2019-11-12 Qualcomm Incorporated Knowledge-graph biased classification for data
US9652685B2 (en) * 2014-09-30 2017-05-16 Disney Enterprises, Inc. Generating story graphs with large collections of online images
US9412043B2 (en) * 2014-10-03 2016-08-09 EyeEm Mobile GmbH Systems, methods, and computer program products for searching and sorting images by aesthetic quality
US9754188B2 (en) * 2014-10-23 2017-09-05 Microsoft Technology Licensing, Llc Tagging personal photos with deep networks
US20160203137A1 (en) * 2014-12-17 2016-07-14 InSnap, Inc. Imputing knowledge graph attributes to digital multimedia based on image and video metadata
US9501466B1 (en) * 2015-06-03 2016-11-22 Workday, Inc. Address parsing system

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160180235A1 (en) * 2014-12-17 2016-06-23 InSnap, Inc. Method for inferring latent user interests based on image metadata
US9798980B2 (en) * 2014-12-17 2017-10-24 The Honest Company, Inc. Method for inferring latent user interests based on image metadata
US10891653B1 (en) * 2015-03-13 2021-01-12 A9.Com, Inc. Approaches for retrieval of electronic advertisements
US20170132509A1 (en) * 2015-11-06 2017-05-11 Adobe Systems Incorporated Item recommendations via deep collaborative filtering
US10255628B2 (en) * 2015-11-06 2019-04-09 Adobe Inc. Item recommendations via deep collaborative filtering
US20180032882A1 (en) * 2016-07-27 2018-02-01 Fuji Xerox Co., Ltd. Method and system for generating recommendations based on visual data and associated tags
CN106597439A (en) * 2016-12-12 2017-04-26 电子科技大学 Synthetic aperture radar target identification method based on incremental learning
US20190138943A1 (en) * 2017-11-08 2019-05-09 International Business Machines Corporation Cognitive visual conversation
US10762153B2 (en) * 2017-11-27 2020-09-01 Adobe Inc. Collaborative-filtered content recommendations with justification in real-time
US20190163829A1 (en) * 2017-11-27 2019-05-30 Adobe Inc. Collaborative-Filtered Content Recommendations With Justification in Real-Time
US11544336B2 (en) 2017-11-27 2023-01-03 Adobe Inc. Collaborative-filtered content recommendations with justification in real-time
US20190311416A1 (en) * 2018-04-10 2019-10-10 International Business Machines Corporation Trend identification and modification recommendations based on influencer media content analysis
US20190311418A1 (en) * 2018-04-10 2019-10-10 International Business Machines Corporation Trend identification and modification recommendations based on influencer media content analysis
US11341207B2 (en) 2018-12-10 2022-05-24 Ebay Inc. Generating app or web pages via extracting interest from images
US11907322B2 (en) 2018-12-10 2024-02-20 Ebay Inc. Generating app or web pages via extracting interest from images
US11373228B2 (en) 2019-01-31 2022-06-28 Walmart Apollo, Llc System and method for determining substitutes for a requested product
US11373231B2 (en) 2019-01-31 2022-06-28 Walmart Apollo, Llc System and method for determining substitutes for a requested product and the order to provide the substitutes
JP2022121602A (en) * 2019-06-28 2022-08-19 Fujifilm Corporation Information processing apparatus, method and program
JP7430222B2 (en) 2019-06-28 2024-02-09 Fujifilm Corporation Information processing device, method, and program
US11281734B2 (en) 2019-07-03 2022-03-22 International Business Machines Corporation Personalized recommender with limited data availability

Also Published As

Publication number Publication date
US9805098B2 (en) 2017-10-31
US20160203137A1 (en) 2016-07-14
US20180082200A1 (en) 2018-03-22
US20160180235A1 (en) 2016-06-23
US20180052855A1 (en) 2018-02-22
US20160203141A1 (en) 2016-07-14
US9798980B2 (en) 2017-10-24

Similar Documents

Publication Publication Date Title
US20160180402A1 (en) Method for recommending products based on a user profile derived from metadata of multimedia content
US10769444B2 (en) Object detection from visual search queries
US11734725B2 (en) Information sending method, apparatus and system, and computer-readable storage medium
US9367603B2 (en) Systems and methods for behavioral segmentation of users in a social data network
CN107667389B (en) System, method and apparatus for identifying targeted advertisements
WO2017181612A1 (en) Personalized video recommendation method and device
US20140280549A1 (en) Method and System for Efficient Matching of User Profiles with Audience Segments
US20140067535A1 (en) Concept-level User Intent Profile Extraction and Applications
US10699320B2 (en) Marketplace feed ranking on online social networks
WO2017190610A1 (en) Target user orientation method and device, and computer storage medium
CN108805598B (en) Similarity information determination method, server and computer-readable storage medium
US20100030648A1 (en) Social media driven advertisement targeting
US20140358630A1 (en) Apparatus and process for conducting social media analytics
WO2012031239A2 (en) User interest analysis systems and methods
TW201447797A (en) Method and system for multi-phase ranking for content personalization
WO2013010104A1 (en) Topic and time based media affinity estimation
US10210429B2 (en) Image based prediction of user demographics
CN108959323B (en) Video classification method and device
US10296540B1 (en) Determine image relevance using historical action data
US9449231B2 (en) Computerized systems and methods for generating models for identifying thumbnail images to promote videos
CN103718178A (en) Utilization of features extracted from structured documents to improve search relevance
US20200111121A1 (en) Systems and methods for automatic processing of marketing documents
CN102722832A (en) Online video advertisement refinement targeting delivery method
US20190050890A1 (en) Video dotting placement analysis system, analysis method and storage medium
Jayarajah et al. Can Instagram posts help characterize urban micro-events?

Legal Events

Date Code Title Description
AS Assignment

Owner name: INSNAP, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SABAH, MOHAMMAD;SADREDDIN, MOHAMMAD IMAN;ABDULLAH, SHAFAQ;REEL/FRAME:034993/0685

Effective date: 20150218

AS Assignment

Owner name: THE HONEST COMPANY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INSNAP INC.;REEL/FRAME:039988/0317

Effective date: 20160203

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., CALIFORNIA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:THE HONEST COMPANY, INC.;REEL/FRAME:043017/0441

Effective date: 20170525

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION