WO2016064679A1 - Extracting product purchase information from electronic messages - Google Patents

Extracting product purchase information from electronic messages

Info

Publication number
WO2016064679A1
Authority
WO
WIPO (PCT)
Prior art keywords
tokens
product purchase
electronic messages
cluster
electronic message
Prior art date
Application number
PCT/US2015/056013
Other languages
French (fr)
Inventor
Ievgen Mastierov
Conal Sathi
Original Assignee
Slice Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/519,975 external-priority patent/US9875486B2/en
Priority claimed from US14/519,919 external-priority patent/US9563904B2/en
Application filed by Slice Technologies, Inc. filed Critical Slice Technologies, Inc.
Publication of WO2016064679A1 publication Critical patent/WO2016064679A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00 Commerce
    • G06Q 30/06 Buying, selling or leasing transactions

Definitions

  • FIG. 1 is a diagrammatic view of an example of a network communications environment.
  • FIG. 2 is a diagrammatic view of electronic message processing stages performed by an example of a product purchase information provider.
  • FIG. 3A is a diagrammatic view of an example of an electronic message.
  • FIG. 3B is a diagrammatic view of the electronic message of FIG. 3A showing data fields that have been identified according to an example of a product purchase information extraction process.
  • FIG. 4 is a flow diagram of an example of a process of training a structure learning parser for labeling data fields of an electronic message.
  • FIG. 5 is a flow diagram of an example of the structure learning parser training process of FIG. 4.
  • FIG. 6A is a diagrammatic view of an example of a set of electronic messages.
  • FIG. 6B is a diagrammatic view of the set of electronic messages of FIG. 6A after being pre-processed.
  • FIG. 6C is a diagrammatic view of a generalized suffix tree.
  • FIG. 6D is a diagrammatic view of a grammar extracted from the generalized suffix tree representation of FIG. 6C.
  • FIG. 7 is a flow diagram of an example of a process of extracting product purchase information from electronic messages.
  • FIG. 8 is a flow diagram of an example of the product purchase information extraction process of FIG. 7.
  • FIG. 9A is a diagrammatic view of an example of an electronic message.
  • FIG. 9B is a diagrammatic view of an example of the electronic message of FIG. 9A after being pre-processed.
  • FIG. 9C is a diagrammatic view of a grammar matched to the pre-processed electronic message of FIG. 9B.
  • FIG. 9D is a diagrammatic view of a syntax tree parsed from the pre-processed electronic message of FIG. 9B according to the grammar of FIG. 9C.
  • FIG. 9E is a diagrammatic view of an example of a visualization of the electronic message of FIG. 9A showing data fields that are identified in the syntax tree shown in FIG. 9D.
  • FIG. 10 is a diagrammatic view of an example of a graphical user interface presenting aggregated product purchase information.
  • FIG. 11 is a block diagram of an example of computer apparatus.
  • a "product” is any tangible or intangible good or service that is available for purchase or use.
  • Product purchase information is information related to the purchase of a product.
  • Product purchase information includes, for example, purchase confirmations (e.g., receipts); product order information (e.g., merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, and order total); and product shipping information (e.g., billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number).
  • An "electronic message” is a persistent text based information record sent from a sender to a recipient between physical network nodes and stored in non- transitory computer-readable memory.
  • An electronic message may be a structured message (e.g., a hypertext markup language (HTML) message that includes structured tag elements) or an unstructured message (e.g., a plain text message).
  • a "computer” is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently.
  • a "computer operating system” is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources.
  • a "software application” (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks.
  • a "data file” is a block of information that durably stores data for use by a software application.
  • a "computer-readable medium" (also referred to as "memory") refers to any tangible, non-transitory device capable of storing information.
  • Storage devices suitable for tangibly embodying such information include, but are not limited to, all forms of physical, non-transitory computer-readable memory, including, for example, semiconductor memory devices, such as random access memory (RAM), EPROM, EEPROM, and Flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM.
  • a “network node” (also referred to simply as a “node”) is a physical junction or connection point in a communications network. Examples of network nodes include, but are not limited to, a terminal, a computer, and a network switch.
  • a “server node” is a network node that responds to requests for information or service.
  • a “client node” is a network node that requests information or service from a server node.
  • the term “includes” means includes but not limited to, the term “including” means including but not limited to.
  • the term “based on” means based at least in part on.
  • the examples that are described herein provide improved systems and methods for extracting product purchase information from electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients. These examples solve practical problems that have arisen from the proliferation of different electronic message formats used by individual merchants, across different merchants, and across different languages.
  • These examples provide a product purchase information extraction service that is able to extract product purchase information from electronic messages with high precision across a wide variety of electronic message formats.
  • these examples are able to automatically learn the structures and semantics of different message formats, which accelerates the ability to support new message sources, new markets, and different languages.
  • improved systems and methods can be deployed to monitor consumer purchases over time to obtain updated purchase history information that can be aggregated for an individual consumer or across many consumers to provide actionable information that directs consumer behavior and organizational marketing strategies. For example, these improved systems and methods can organize disparate product purchase information extracted from individual electronic messages into actionable data that can be used by a consumer to organize her prior purchases and enhance her understanding of her purchasing behavior and can be used by merchants and other organizations to improve the accuracy and return-on-investment of their marketing campaigns.
  • these systems and methods include improved special purpose computer apparatus programmed to build a structure learning parser that automatically learns the structure of an electronic message and accurately parses product purchase information from the electronic message.
  • These systems and methods also include improved special purpose computer apparatus programmed to function as a structure learning parser that automatically learns the structure of an electronic message and accurately parses product purchase information from the electronic message.
  • FIG. 1 shows an example of a network communications environment 10 that includes a network 11 that interconnects a product purchase information provider 12, one or more product merchants 14 that sell products, one or more product delivery providers 16 that deliver purchased products to purchasers, one or more message providers 18 that provide message handling services, and one or more product purchase information consumers 20 that purchase product purchase information.
  • the network 11 may include any of a local area network (LAN), a metropolitan area network (MAN), and a wide area network (WAN) (e.g., the internet).
  • the network 11 typically includes a number of different computing platforms and transport facilities that support the transmission of a wide variety of different media types (e.g., text, voice, audio, and video) between network nodes 14 and the product provider 18.
  • Each of the product purchase information provider 12, the product merchants 14, the product delivery providers 16, the message providers 18, and the product purchase information consumers 20 typically connects to the network 11 via a network node (e.g., a client computer or a server computer) that includes a tangible computer-readable memory, a processor, and input/output (I/O) hardware (which may include a display).
  • One or more of the product merchants 14 typically allow consumers and businesses to directly purchase products over the network 11 using a network enabled software application, such as a web browser.
  • One or more of the product merchants 14 also may allow consumers and businesses to purchase products in a physical retail establishment.
  • a product merchant may send a product purchase confirmation electronic message to a messaging address associated with the purchaser.
  • the product purchase confirmation message may include, for example, product order information such as merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, and order total.
  • the product merchant also may arrange to have the product delivered by one of the product delivery providers 16.
  • the product delivery provider 16 may deliver the product to the purchaser physically or electronically. In either case, the product delivery provider 16 or the product merchant 14 may send a delivery notification electronic message to the messaging address associated with the purchaser.
  • the delivery notification electronic message may include, for example, product shipping information such as product order information, billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number.
  • the purchaser's messaging address may be any type of network address to which electronic messages may be sent.
  • Examples of such messaging addresses include electronic mail (e-mail) addresses, text messaging addresses (e.g., a sender identifier, such as a telephone number or a user identifier for a texting service), a user identifier for a social networking service, and a facsimile telephone number.
  • the product purchase related electronic messages typically are routed to the purchaser through respective ones of the message providers 18 associated with the purchaser's messaging address.
  • the message providers 18 typically store the purchasers' electronic messages in respective message folder data structures in a database.
  • the product purchase information provider 12 extracts product purchase information from the electronic messages of product purchasers. In some examples, the product purchase information provider 12 obtains authorization from the product purchasers to access their respective message folders that are managed by the message providers 18. In other examples, product purchasers allow the product purchase information provider 12 to access their electronic messages that are stored on their local communication devices (e.g., personal computer or mobile phone).
  • the product purchase information provider 12 processes the electronic messages 22 through a number of stages before producing processed data 24 that is provided to the product purchase information consumers 20. These stages include a message discovery stage 26, a field extraction stage 28, and a data processing stage 30.
  • the product purchase information provider 12 identifies the ones of the electronic messages 22 that relate to product purchases.
  • rule-based filters and machine learning classifiers are used to identify product purchase related electronic messages.
  • the product purchase information provider 12 extracts product purchase information from the identified ones of the electronic messages 22.
  • examples of the product purchase information include merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, order total, billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number.
  • the product purchase information provider 12 processes the extracted product purchase information according to the different types of product purchase information consumers. For example, for individual users, the extracted product purchase information is processed, for example, to display information about the users' purchases, including information for tracking in-transit orders, information for accessing purchase details, and aggregate purchase summary information. For advertisers, the extracted product purchase information is processed, for example, to assist in targeting advertising to consumers based on their purchase histories. For market analysts, the extracted product purchase information is processed to provide, for example, anonymous item-level purchase detail across retailers, categories, and devices.
  • the product purchase information provider 12 includes a structure learning parser that extracts product purchase information from an electronic message using a grammar based parsing approach to identify structural elements and data fields in the electronic message and a machine learning approach to classify the data fields.
  • the structural elements correspond to static, optional, and iterating elements that commonly appear in a particular type of product purchase related electronic message, whereas the data fields contain the variable information at least some of which corresponds to the product purchase information that is extracted.
  • FIG. 3A shows an example of an electronic message 32 for a product order
  • FIG. 3B shows the electronic message 32 with data fields (marked with bold font) that have been identified according to an example of a product purchase information extraction process.
  • the structural elements are: an introductory "Dear" 34; standard informational text 36 (i.e., "Thank you for placing your order ... once your item has been shipped."); "Order Number:" 38; "Order Summary" 40; "Product Subtotal:" 42; "Discounts:" 44; "Shipping Charges:" 46; "Tax:" 48; "Total:" 50; "Part No" 52; "Product Price" 54; "Discount" 56; "Part No" 58; "Product Price" 60; and "Discount" 62.
  • the structural elements 34-50 are static elements and the sets of structural elements 52-56 and 58-62 include the same static elements that repeat in respective iterating elements.
  • the non-structural elements (e.g., prices, order number, and part numbers) of the electronic message are data fields that are extracted and classified by the structure learning parser component of the product purchase information provider 12.
  • FIG. 4 shows an example of a method of building a structure learning parser that extracts product purchase information from an electronic message.
  • computer apparatus is programmed to perform the method of FIG. 4.
  • the computer apparatus groups electronic messages into respective clusters based on similarities between the electronic messages, the electronic messages having been transmitted between physical network nodes to convey product purchase information to designated recipients (FIG. 4, block 70). For each cluster, the computer apparatus extracts a respective grammar defining an arrangement of structural elements of the electronic messages in the cluster (FIG. 4, block 72).
  • Based on training data that includes fields of electronic messages comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, the computer apparatus builds a classifier that classifies fields of a selected electronic message that includes product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars matched to the selected electronic message (FIG. 4, block 74).
  • the computer apparatus typically stores the grammars and the classifier in non-transitory computer-readable memory in one or more data structures permitting computer-based parsing of product purchase information from electronic messages.
  • a structure learning parser builder includes a product purchase information grammar extractor that performs the grouping and extracting operations of blocks 70-72 of FIG. 4, and a product purchase information token classifier trainer that performs the classifier building operation of block 74 of FIG. 4.
  • the structure learning parser builder is a software application that programs a computer to implement the product purchase information grammar extractor, where a different respective software module includes a respective set of computer-readable instructions for performing each of the grouping and extracting operations of blocks 70-72.
  • the product purchase information token classifier trainer is a machine learning training software application that programs a computer to perform the classifier building operation of block 74.
  • FIG. 5 shows a flow diagram of an example of the structure learning parser building process of FIG. 4.
  • the computer apparatus retrieves from a data store (e.g., a database) electronic messages 80 that have been transmitted between physical network nodes to convey product purchase information to designated recipients.
  • FIG. 6A shows an example 81 of one of the electronic messages 80.
  • the computer apparatus pre-processes the electronic messages 80 (FIG. 5, block 82).
  • the computer apparatus tokenizes the text-based contents of the electronic messages by extracting contiguous strings of symbols (e.g., symbols representing alphanumeric characters) separated by white spaces.
  • the contiguous symbol strings typically correspond to words and numbers.
  • the computer apparatus then replaces tokens that match patterns for integers and real numbers (typically prices) in the electronic messages 80 with wildcard tokens.
  • FIG. 6B shows an example of a pre-processed version 83 of the electronic message 81 in which integers have been replaced with the wildcard token "INT" and real numbers have been replaced with the wildcard token "FLOAT".
  • the replacement of the variable integer and real number elements of each electronic message with wildcard tokens improves the detection of iterating elements of the electronic messages.
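The pre-processing step described above (whitespace tokenization followed by wildcard substitution) can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation, and the exact numeric patterns are assumptions:

```python
import re

# Patterns for real numbers (e.g. prices such as 19.99) and integers;
# the exact patterns are assumptions for illustration.
FLOAT_RE = re.compile(r"^\d+\.\d+$")
INT_RE = re.compile(r"^\d+$")

def preprocess(message_text):
    """Tokenize on whitespace and replace numeric tokens with wildcards."""
    out = []
    for tok in message_text.split():
        if FLOAT_RE.match(tok):
            out.append("FLOAT")   # real number -> FLOAT wildcard
        elif INT_RE.match(tok):
            out.append("INT")     # integer -> INT wildcard
        else:
            out.append(tok)
    return out
```

Because every order number and price collapses to the same wildcard, messages that differ only in those values become token-identical, which is what makes the iterating elements detectable.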
  • For each of the pre-processed messages 84 (FIG. 5, block 86), the computer apparatus attempts to determine a merchant that is associated with the electronic message (FIG. 5, block 90). For some types of electronic messages, the computer apparatus attempts to determine the merchant from header information that includes supplemental information about the electronic message. For example, an electronic mail (e-mail) message includes header information that indicates the sender, the recipient, and the subject of the electronic mail message, and a text message typically includes a Sender ID that indicates the sender of the message. In some cases, the computer apparatus may be able to determine the merchant from the sender or subject contained in the header information. In some cases, the computer apparatus may attempt to determine the merchant from the content of the electronic message.
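As one illustration of determining a merchant from e-mail header information, the sender's address domain can serve as a heuristic. This is a hedged sketch, not the patent's actual method; the helper name and the subdomain-stripping rule are assumptions:

```python
from email.utils import parseaddr

def merchant_from_header(from_header):
    """Guess the merchant from an e-mail From: header by taking the
    sender's domain (a heuristic sketch, not the patent's method)."""
    _, addr = parseaddr(from_header)
    if "@" not in addr:
        return None  # header did not contain a parseable address
    domain = addr.split("@", 1)[1].lower()
    # Strip a common subdomain prefix such as "orders." or "mail."
    # by keeping the second-level label (an assumption for illustration).
    parts = domain.split(".")
    return parts[-2] if len(parts) >= 2 else domain
```

In practice a lookup table from known sender addresses to merchant names would be more reliable; the domain heuristic only covers the easy cases.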
  • the computer apparatus clusters the electronic messages by merchant (FIG. 5, block 92).
  • the computer apparatus sorts the electronic messages into groups by message sender, where each message sender is associated with a respective one of the groups of electronic messages.
  • the computer apparatus clusters the electronic messages within the group into one or more clusters based on similarities between the electronic messages.
  • the result is a respective set 94, 96 of clusters (cluster 1,1 ... cluster 1,n; cluster k,1 ... cluster k,m) for each merchant, where each cluster consists of electronic messages that are similar to one another.
  • the computer apparatus applies to the electronic messages a clustering process (e.g., a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) process, a k-means clustering process, or a hierarchical clustering process) that clusters electronic messages based on measures of content similarity between pairs of the electronic messages.
  • Each successive electronic message to be clustered is compared to each of the electronic messages in each existing cluster and is added to the cluster containing an electronic message having a similarity with the electronic message being clustered that exceeds a similarity threshold; if the electronic message being clustered has a similarity that exceeds the similarity threshold with the electronic messages of multiple clusters, the multiple clusters are merged into a single cluster. If the similarities between the electronic message being clustered and the previously clustered electronic messages do not exceed the similarity threshold, a new cluster is created for the electronic message being clustered.
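The merge-on-match procedure just described can be sketched as follows. This is an illustrative sketch under the assumption that `similarity` is any pairwise measure (such as the Jaccard coefficient) returning a value in [0, 1]:

```python
def cluster_messages(messages, similarity, threshold):
    """Greedy clustering: each message joins every cluster containing a
    sufficiently similar message; clusters it would join are merged;
    otherwise a new cluster is started for it."""
    clusters = []  # list of lists of messages
    for msg in messages:
        # All existing clusters with at least one similar-enough member.
        matching = [c for c in clusters
                    if any(similarity(msg, other) > threshold for other in c)]
        if not matching:
            clusters.append([msg])  # start a new cluster
        else:
            # Merge every matching cluster with the new message.
            merged = [m for c in matching for m in c] + [msg]
            clusters = [c for c in clusters if c not in matching] + [merged]
    return clusters
```

The result depends on message order and on the threshold, which is why the text also mentions off-the-shelf alternatives such as DBSCAN, k-means, and hierarchical clustering.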
  • measures of content similarity compare similarity and diversity of the contents of pairs of electronic messages.
  • the similarity measure corresponds to the Jaccard similarity coefficient, which measures similarity between two electronic messages based on the size of the intersection divided by the size of the union of features of the electronic messages.
  • the computer apparatus extracts lines of content (i.e., whole lines, as opposed to individual words in the lines) from each electronic message as the features that are compared, and measures similarities between electronic messages using line-based comparisons of the extracted content. This line-based feature matching approach improves the accuracy of the clustering process by narrowing the range of matches between electronic messages.
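The line-based Jaccard measure described above might look like the following sketch (the function name and the blank-line filtering are assumptions for illustration):

```python
def line_jaccard(msg_a, msg_b):
    """Jaccard similarity over whole lines of two messages:
    |intersection| / |union| of the sets of non-blank lines."""
    lines_a = set(line.strip() for line in msg_a.splitlines() if line.strip())
    lines_b = set(line.strip() for line in msg_b.splitlines() if line.strip())
    if not lines_a and not lines_b:
        return 1.0  # two empty messages are trivially identical
    return len(lines_a & lines_b) / len(lines_a | lines_b)
```

Comparing whole lines rather than individual words is what narrows the range of matches: two receipts share a line only when an entire boilerplate row (e.g. "Order Summary") matches exactly.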
  • the computer apparatus determines a respective grammar for each electronic message cluster (FIG. 5, block 98).
  • the computer apparatus builds a respective generalized suffix tree representation of contents of the electronic messages in the cluster, and ascertains the arrangement of structural elements of the electronic messages in the cluster based on the respective generalized suffix tree representation.
  • the suffix tree representation contains all the suffixes (one or more word sequences, referred to as "phrases") as keys and their positions in the text as values.
  • the suffix tree representation maintains the order of suffixes in a hierarchical tree structure of nodes that are linearly interconnected from root to leaf node and, for each suffix, identifies the electronic messages in which the suffix appears and the number of times it appears in each electronic message.
  • the suffix tree representation of the electronic messages in a given cluster is built by applying Ukkonen's algorithm for constructing suffix trees (E. Ukkonen, "On-Line Construction of Suffix Trees," Algorithmica, September 1995, Volume 14, Issue 3, pp. 249-260 (1995)) to a single string formed by concatenating the tokenized contents of all the electronic messages in the given cluster.
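Ukkonen's construction itself is involved; purely to illustrate the information the generalized suffix tree captures (which suffix phrases occur in which messages, and where), here is a naive quadratic suffix index over tokenized messages. It is an illustrative stand-in, not the linear-time construction the patent cites:

```python
from collections import defaultdict

def build_suffix_index(messages):
    """Naive stand-in for a generalized suffix tree: maps each suffix
    (a tuple of tokens) to the list of (message index, start position)
    pairs where it occurs. Ukkonen's algorithm yields the same
    information in linear time; this O(n^2) version is for illustration."""
    index = defaultdict(list)
    for mi, tokens in enumerate(messages):
        for start in range(len(tokens)):
            index[tuple(tokens[start:])].append((mi, start))
    return index
```

From such an index one can read off, for any phrase, which messages it appears in and how often, which is exactly what the structural-element detection below needs.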
  • FIG. 6C shows a diagrammatic view of an example of a generalized suffix tree 112 representation of the contents of respective ones of the pre-processed electronic messages 84 in a particular merchant-specific cluster (see FIG. 6B).
  • the bolded nodes 118-126 correspond to static elements that are common to all the electronic messages in a given cluster and the leaves 114, 116 demarcate the ends of respective ones of the electronic messages.
  • the computer apparatus traverses the generalized suffix tree to identify structural elements of the electronic messages in a given cluster.
  • the computer identifies substrings that correspond to static elements, optional elements, and iterating elements in the electronic messages of the given cluster.
  • substrings that appear in all the electronic messages in the given cluster are considered static elements
  • substrings that appear in a majority (e.g., 90%) of the electronic messages in the given cluster are considered optional elements
  • substrings that appear in all the electronic messages in the given cluster and sometimes repeat within individual ones of the electronic messages are considered iterating elements.
  • Substrings that appear in less than a majority (e.g., below 10%) of the electronic messages of a given cluster are considered electronic message specific elements that are extracted as data fields.
  • the computer apparatus typically applies a series of processes to the tree to detect structural elements of the electronic messages in a given cluster. These processes operate on the branches and on the special terminal characters that end the branches, where each terminal character represents a respective one of the electronic messages in the given cluster.
  • the computer apparatus traverses each branch from the root element until it splits into sub-branches. If the sub-branches all end with electronic message terminal characters with one terminal character for each sub-branch, then the branch is common across all the electronic messages of the cluster and the computer apparatus labels the token sequence corresponding to the branch as a static element.
  • the process of identifying iterating elements is similar to the process of identifying static elements.
  • the computer apparatus locates each branch in the generalized suffix tree that splits into the sub-branches and inspects the terminal character of the branch. Unlike the static detection process where the computer apparatus locates branches that split into the set of terminals matching the set of electronic messages in the cluster, the process of identifying iterating elements involves locating each branch that splits into terminal characters that match all the electronic messages in the given cluster and match at least one of the electronic messages in the given cluster more than once.
  • the computer apparatus applies rules to branches, such as minimum token sequence length and a minimum threshold variance of the repeating token sequence across the electronic messages in the given cluster.
  • the minimum token sequence length rule filters out common words (e.g., "the” and "and") and product names that appear frequently in electronic messages.
  • the minimum threshold variance criterion distinguishes iterating sections from static elements that appear infrequently in the electronic messages of the given cluster. For example, in an electronic message containing a purchase confirmation for a book titled "Thanks for your purchase", the title might be incorrectly identified as a structural element if the same phrase also appears elsewhere in the text of the electronic message; but because the token sequence "Thanks for your purchase" appears very infrequently in this section of the cluster's electronic messages, its variance value in this section would be very low and it would not be misidentified as an iterating section.
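A minimal sketch of the variance check, assuming the repeat counts of a candidate token sequence per message are already available. The 0.5 threshold is an illustrative assumption, not a value from the patent:

```python
from statistics import pvariance

def passes_variance_criterion(counts, min_variance=0.5):
    """Minimum-variance rule: the repeat count of a genuinely iterating
    token sequence varies across messages (orders have different numbers
    of items), while an occasional static phrase repeats uniformly."""
    return len(counts) > 1 and pvariance(counts) >= min_variance
```

Counts like `[1, 3, 2, 5]` (one line item per product, varying per order) pass, while a phrase appearing exactly once everywhere does not.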
  • each grammar recursively defines allowable arrangements of the tokens corresponding to the structural elements.
  • the computer apparatus typically stores the cluster grammars in one or more data structures in non-transitory computer-readable memory.
  • FIG. 6D shows an example of a grammar 130 that is extracted from the generalized suffix tree representation of FIG. 6C.
  • the grammar preserves the arrangement (e.g., order) of the static elements 132, 134, 136, the optional elements, the iterating elements 138, and the data fields 140, 142, 144.
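To illustrate how an order-preserving grammar lets data fields fall out between structural elements, here is a simplified matcher over a flattened, ordered list of static token sequences. This is a sketch under simplifying assumptions: the real grammar is recursive and also handles optional and iterating elements:

```python
def extract_fields(tokens, static_elements):
    """Match a message's tokens against an ordered list of static token
    sequences (a simplified stand-in for the extracted grammar) and
    return the token runs between them, which are the data fields."""
    fields, pos = [], 0
    for elem in static_elements:
        # Scan forward for the next occurrence of this static element.
        for i in range(pos, len(tokens) - len(elem) + 1):
            if tokens[i:i + len(elem)] == list(elem):
                if i > pos:
                    fields.append(tokens[pos:i])  # variable run before it
                pos = i + len(elem)
                break
        else:
            return None  # grammar does not match this message
    if pos < len(tokens):
        fields.append(tokens[pos:])  # trailing variable run
    return fields
```

A `None` result would signal that the message belongs to a different cluster (i.e., a different grammar should be tried).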
  • the computer apparatus trains one or more classifiers to label the data fields that are determined for each cluster of electronic messages.
  • the computer apparatus selects a training set 102 of electronic messages.
  • the training set 102 is selected from the collection of pre-processed messages 84; in other examples the training set 102 is selected from another collection of electronic messages that include product purchase information.
  • the electronic messages in the training set 102 are selected without regard to the merchant associated with the electronic messages.
  • a single training set can be used to train the one or more data field labeling classifiers across a wide variety of different merchants, which increases the scalability of the training process as compared with a training process in which a respective set of classifiers is trained for each merchant.
  • the computer apparatus or human operator identifies features in the training set 102 of electronic messages that will be used to train the one or more classifiers 106-110 (FIG. 5, block 104).
  • the data fields that are to be labeled by the one or more classifiers are identified in the training electronic messages and used to create the features that will be used to train the one or more classifiers to associate the correct label with the target data fields.
  • a price classifier 106;
  • an identifier classifier 108; and
  • an item description classifier 110.
  • the price classifier 106 is a machine learning classifier that is trained to label ones of the extracted field tokens with respective price labels in a predetermined price classification taxonomy.
  • the price classifier 106 is trained to label price token variants with the following order-level price labels: shipping; tax; total; sub-total; and discount.
  • the computer apparatus identifies candidate price field tokens in the training set 102 of electronic messages (e.g., for U.S. dollar based prices, the computer apparatus looks for a "$" symbol followed by a decimal number consisting of an integer part and a two-digit fractional part separated by the decimal separator "."). For each candidate price, the computer apparatus determines features from the words used in the static token sequence that precedes the candidate price field token.
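The candidate price detection described above can be sketched as follows (a minimal illustration under the stated U.S.-dollar assumption; the pattern and function names are not from the specification):

```python
import re

# Candidate U.S. dollar price: a "$" symbol followed by an integer part,
# the "." decimal separator, and a two-digit fractional part.
PRICE_PATTERN = re.compile(r"\$\d+\.\d{2}")

def find_candidate_prices(text):
    """Return all candidate price field tokens found in `text`."""
    return PRICE_PATTERN.findall(text)

print(find_candidate_prices("Subtotal: $19.99 Tax: $1.75 Qty: 2"))
# → ['$19.99', '$1.75']
```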
  • the computer apparatus breaks the static token sequence preceding a particular candidate price into two-word phrases (including special character words demarcating the beginning and end of the sequence, such as <start> and <end>) that are used as the features for training the price classifier to label that particular price with the assigned label from the price taxonomy. For example, if the static token sequence preceding an identified price field token that is assigned the "total" price label consists of "You paid the total:", the computer apparatus would convert the static token sequence into the following features: "<start> you"; "you paid"; "paid the"; "the total:"; "total: <end>".
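The two-word phrase feature extraction in the example above can be sketched as follows (the function name is illustrative):

```python
def bigram_features(static_tokens):
    """Break a static token sequence into two-word phrases, adding the
    special <start> and <end> boundary words, for use as classifier features."""
    words = ["<start>"] + [w.lower() for w in static_tokens] + ["<end>"]
    return [f"{a} {b}" for a, b in zip(words, words[1:])]

print(bigram_features(["You", "paid", "the", "total:"]))
# → ['<start> you', 'you paid', 'paid the', 'the total:', 'total: <end>']
```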
  • the price classifier automatically learns the weights to assign to the features based on the training data.
  • the price classifier 106 is trained according to a naïve Bayes training process.
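The specification names a naïve Bayes training process without detailing it; the following self-contained sketch shows one way a multinomial naïve Bayes classifier with add-one smoothing could be trained on the static-phrase features described above (all names and the toy training data are illustrative, not from the specification):

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """Train a multinomial naive Bayes classifier from (features, label)
    pairs with add-one (Laplace) smoothing; returns a classify function."""
    label_counts = Counter()
    feature_counts = defaultdict(Counter)
    vocabulary = set()
    for features, label in examples:
        label_counts[label] += 1
        for f in features:
            feature_counts[label][f] += 1
            vocabulary.add(f)
    total = sum(label_counts.values())

    def classify(features):
        best_label, best_logprob = None, float("-inf")
        for label, count in label_counts.items():
            logprob = math.log(count / total)  # class prior
            denom = sum(feature_counts[label].values()) + len(vocabulary)
            for f in features:
                logprob += math.log((feature_counts[label][f] + 1) / denom)
            if logprob > best_logprob:
                best_label, best_logprob = label, logprob
        return best_label

    return classify

# Toy training set: two-word phrase features of static token sequences → label.
classify = train_naive_bayes([
    (["you paid", "the total:"], "total"),
    (["order total:"], "total"),
    (["shipping &"], "shipping"),
    (["shipping cost:"], "shipping"),
])
print(classify(["the total:"]))  # → total
```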
  • the identifier classifier 108 is a machine learning classifier that is trained to label respective ones of the extracted field tokens with an identifier label in a predetermined identifier classification taxonomy. In some examples, the identifier classifier 108 is trained to label identifier variants with the following identifier labels: order number; tracking number; and SKU.
  • the computer apparatus identifies candidate identifier field tokens (e.g., non-decimal numeric and alphanumeric strings) in the training set 102 of electronic messages. For each candidate identifier, the computer apparatus trains the identifier classifier 108 to classify the candidate identifier based on features that include (i) a token extracted from the selected electronic message that corresponds to a static structural element of the respective grammar that immediately precedes the identifier field token in the selected electronic message, and (ii) characteristics of the candidate identifier field token itself.
  • the computer apparatus breaks the static token sequence preceding a particular candidate identifier into two-word phrases (including special character words demarcating the beginning and end of the sequence, such as <start> and <end>) that are used as the features for training the identifier classifier to label that particular identifier with the assigned label from the identifier taxonomy.
  • the computer apparatus uses characteristics of the candidate identifier field token, including the symbol length of the candidate identifier, the percentage of numeric symbols (also referred to as digits) in the candidate identifier, the location of the candidate identifier in the electronic message (e.g., in the subject field in the header of the electronic message, at the top of the body of the electronic message, or at the bottom of the body of the electronic message).
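The identifier-token characteristics listed above can be collected into a feature dictionary roughly as follows (the function and field names are assumptions for illustration):

```python
def identifier_features(token, location):
    """Characteristics of a candidate identifier field token: symbol
    length, percentage of numeric symbols (digits), and location in the
    message (e.g., "subject", "body_top", "body_bottom")."""
    digits = sum(ch.isdigit() for ch in token)
    return {
        "symbol_length": len(token),
        "pct_numeric": digits / len(token),
        "location": location,
    }

# A tracking-number-like candidate found near the bottom of the message body.
print(identifier_features("1Z999AA10123456784", "body_bottom"))
```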
  • the identifier classifier 108 automatically learns the weights to assign to the features based on the training data.
  • the identifier classifier 108 is trained according to a logistic regression training process.
  • the item description classifier 110 is a machine learning classifier that is trained to label respective ones of the extracted field tokens as an item description.
  • the computer apparatus identifies candidate item description field tokens (e.g., word phrase symbol strings) in the training set 102 of electronic messages.
  • the computer apparatus trains the classifier to classify the candidate item description based on features that include, for example: the percentage of phrases that the candidate item description has in common with a known item description (e.g., an item description in a database of product descriptions, such as a list of products previously purchased by the recipient of the electronic message or a product catalogue associated with the merchant associated with the electronic message); the percentage of phrases that the candidate item description has in common with a compilation of phrases that are known to not be part of product descriptions (e.g., identifier-related phrases, such as "Order No.", and order-level price-related phrases, such as "Total Price", are examples of phrases that typically are included in the compilation as not corresponding to item descriptions); and the percentage of capitalized symbols in the candidate item description field tokens.
  • the item description classifier 110 automatically learns the weights to assign to the features based on the training data.
  • the item description classifier 110 is trained according to a logistic regression training process.
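The three item-description features described above can be sketched as follows (a simplified illustration that approximates "phrases" as two-word sequences; all names are assumptions):

```python
def item_description_features(candidate, known_descriptions, non_product_phrases):
    """Compute the three feature values: overlap with known product
    descriptions, overlap with known non-product phrases, and the
    percentage of capitalized symbols in the candidate."""
    words = candidate.split()
    bigrams = {" ".join(p) for p in zip(words, words[1:])}
    known = {" ".join(p) for d in known_descriptions
             for p in zip(d.split(), d.split()[1:])}
    n = max(len(bigrams), 1)  # guard against one-word candidates
    return {
        "pct_known": len(bigrams & known) / n,
        "pct_non_product": len(bigrams & set(non_product_phrases)) / n,
        "pct_capitalized": sum(c.isupper() for c in candidate) / len(candidate),
    }

print(item_description_features(
    "Acme Wireless Mouse",                  # candidate item description
    ["Acme Wireless Mouse Model 3"],        # e.g., from a product catalogue
    ["Order No.", "Total Price"]))          # known non-product phrases
```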
  • In addition to building the price classifier 106, the identifier classifier 108, and the item description classifier 110, the computer apparatus also applies heuristics to classify candidate item-level quantity field tokens and candidate item-level price field tokens.
  • An example of an item-level quantity classification heuristic is the magnitude of the numeric field token in an iterating section of an electronic message.
  • An example of an item-level price classification heuristic is a phrase of one or more words (e.g., "item price") that appears in a static token sequence that precedes a candidate price field token in an iterating section of an electronic message.
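The two item-level heuristics above might be combined along these lines (the magnitude threshold and phrase test are illustrative assumptions, not values from the specification):

```python
def classify_iterating_numeric(token, preceding_static_tokens):
    """Heuristically label a field token found in an iterating section:
    a price-indicating phrase in the preceding static tokens (or a "$"
    prefix) marks an item-level price; a small-magnitude integer is
    treated as an item-level quantity."""
    preceding = " ".join(preceding_static_tokens).lower()
    if "item price" in preceding or token.startswith("$"):
        return "item_price"
    if token.isdigit() and int(token) < 100:  # illustrative magnitude cutoff
        return "item_quantity"
    return None

print(classify_iterating_numeric("2", ["Qty:"]))  # → item_quantity
```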
  • FIG. 7 shows a method by which an example of a structure learning parser extracts product purchase information from an electronic message.
  • the computer apparatus is programmed to perform the method of FIG. 7.
  • the computer apparatus matches a selected electronic message to one of multiple clusters of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar defining an arrangement of structural elements of electronic messages in the cluster (FIG. 7, block 150).
  • the computer apparatus segments the selected electronic message into tokens that include product purchase information (FIG. 7, block 152).
  • the computer apparatus parses the tokens in accordance with the grammar associated with the cluster matched to the selected electronic message, where the parsing process includes identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar and extracting unidentified ones of the tokens as field tokens (FIG. 7, block 154).
  • the computer apparatus determines classification features of the selected electronic message (FIG. 7, block 156).
  • the computer apparatus classifies respective ones of the extracted field tokens with respective product purchase relevant labels based on respective ones of the determined classification features (FIG. 7, block 158).
  • the computer apparatus typically stores associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens in one or more data structures (e.g., a database) permitting computer-based generation of actionable purchase history information.
  • the structure learning parser includes a product purchase information token parser that performs the matching, segmenting, and parsing operations of blocks 150-154 of FIG. 7, and a product purchase information token classifier that performs the determining and classifying operations of blocks 156-158 of FIG. 7.
  • a software application that programs a computer to perform the matching, segmenting, and parsing operations of blocks 150-154 implements the product purchase information token parser, where a different respective software module includes a respective set of computer-readable instructions for performing the matching, segmenting, and parsing operations.
  • a machine learning software application that programs a computer to perform the determining and classifying operations of blocks 156-158 implements the product purchase information token classifier.
  • FIG. 8 shows a flow diagram of an example of the electronic message parsing process of FIG. 7.
  • the computer apparatus retrieves from a data store (e.g., a database) electronic messages 160 that have been transmitted between physical network nodes to convey product purchase information to designated recipients.
  • FIG. 9A shows an example 161 of one of the electronic messages 160.
  • the computer apparatus pre-processes the electronic messages 160 (FIG. 8, block 162).
  • the computer apparatus tokenizes the text-based contents of the electronic messages by extracting contiguous strings of symbols (e.g., symbols representing alphanumeric characters) separated by white spaces.
  • the contiguous symbol strings typically correspond to words and numbers.
  • the computer apparatus then replaces tokens that match patterns for integers and real numbers (typically prices) in the electronic messages 160 with wildcard tokens.
  • FIG. 9B shows an example of a pre-processed version 163 of the electronic message 161 in which integers have been replaced with the wildcard token "INT" and real numbers have been replaced with the wildcard token "FLOAT".
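The tokenization and wildcard-replacement pre-processing step can be sketched as follows (the regular expressions are assumptions consistent with the INT/FLOAT example above):

```python
import re

def preprocess(text):
    """Split the message text on white space, then replace integer tokens
    with the wildcard "INT" and real-number tokens with "FLOAT"."""
    wildcarded = []
    for token in text.split():
        if re.fullmatch(r"\d+", token):
            wildcarded.append("INT")
        elif re.fullmatch(r"\d+\.\d+", token):
            wildcarded.append("FLOAT")
        else:
            wildcarded.append(token)
    return wildcarded

print(preprocess("Qty 2 at 19.99 each"))
# → ['Qty', 'INT', 'at', 'FLOAT', 'each']
```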
  • For each of the pre-processed messages 164 (FIG. 8, block 166), the computer apparatus attempts to determine a merchant that is associated with the electronic message (FIG. 8, block 168). For some types of electronic messages, the computer apparatus attempts to determine the merchant from header information that includes supplemental information about the electronic message. For example, an electronic mail (e-mail) message includes header information that indicates the sender, the recipient, and the subject of the electronic mail message, and a text message typically includes a Sender ID that indicates the sender of the message. In some cases, the computer apparatus may be able to determine the merchant from the sender or subject contained in the header information. In other cases, the computer apparatus may attempt to determine the merchant from the content of the electronic message.
  • the computer apparatus attempts to match the electronic message to one of multiple clusters of electronic messages 170 that is associated with the determined merchant.
  • the set 170 of clusters corresponds to one of the merchant-specific sets 94, 96 of electronic message clusters into which the electronic messages 84 were grouped in the structure learning parser building process described above in connection with FIG. 5.
  • After determining the set 170 of clusters of electronic messages that is associated with the merchant associated with the electronic message, the computer apparatus matches the electronic message to a respective one of the clusters in the set 170 of clusters (FIG. 8, block 172). In some examples, the computer apparatus associates each of the clusters in the determined set 170 with a respective similarity score that indicates a degree of similarity between contents of the selected electronic message and contents of the electronic messages of the cluster. The computer apparatus then matches the electronic message to the cluster 174 in the set 170 that is associated with a highest one of the similarity scores.
  • each similarity score compares similarity and diversity of the contents of the electronic message and contents of a respective one of the electronic messages of the associated cluster.
  • measures of content similarity compare similarity and diversity of the contents of pairs of electronic messages.
  • the similarity measure corresponds to the Jaccard similarity coefficient, which measures similarity between two electronic messages based on the size of the intersection divided by the size of the union of features of the electronic messages.
  • the computer apparatus extracts lines of content (i.e., whole lines, as opposed to individual words in the lines) from each electronic message as the features that are compared, and measures similarities between electronic messages using line-based comparisons of the extracted content. This line-based feature matching approach improves the accuracy of the clustering process by narrowing the range of matches between electronic messages.
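The line-based Jaccard similarity measure described above can be sketched as follows (the function name is illustrative):

```python
def jaccard_lines(message_a, message_b):
    """Jaccard similarity coefficient over whole lines of content:
    |intersection| / |union| of the two messages' sets of non-empty lines."""
    lines_a = {ln.strip() for ln in message_a.splitlines() if ln.strip()}
    lines_b = {ln.strip() for ln in message_b.splitlines() if ln.strip()}
    union = lines_a | lines_b
    return len(lines_a & lines_b) / len(union) if union else 0.0

m1 = "Thanks for your order\nOrder Total: FLOAT\nShips soon"
m2 = "Thanks for your order\nOrder Total: FLOAT\nTrack your package"
print(jaccard_lines(m1, m2))  # → 0.5
```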
  • each cluster in the matched merchant-specific set 170 of clusters is associated with a respective grammar that defines an arrangement of structural elements of electronic messages in the cluster.
  • the computer apparatus determines the grammar 176 that is associated with the cluster 174 that is matched to the electronic message.
  • FIG. 9C shows an example of the grammar 176 that is matched to the electronic message.
  • the grammar 176 corresponds to the grammar 130 shown in FIG. 6D.
  • the grammar preserves the arrangement (e.g., order) of the static elements 132, 134, 136, the optional elements, the iterating elements 138, and the data fields 140, 142, 144.
  • the grammar recursively defines allowable arrangements of the tokens corresponding to the structural elements.
  • the computer apparatus parses the electronic message according to the determined grammar 176 (FIG. 8, block 178). In this process, the computer apparatus matches the sequence of structural elements in the grammar to the tokens identified in the pre-processed version of the electronic message. The result is an ordered arrangement of tokens 224 matched to respective ones of the structural elements of the grammar and a set of unidentified ones of the tokens that are extracted as data fields.
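A greatly simplified sketch of parsing a token stream against a grammar's static elements, with unmatched tokens extracted as data fields (real grammars also handle optional and iterating elements, which are omitted here; all names are illustrative):

```python
def parse_with_grammar(tokens, static_elements):
    """Align tokens against the grammar's ordered static elements;
    tokens that do not match any structural element are extracted
    as data field tokens."""
    matched, fields = [], []
    i = 0
    for token in tokens:
        if i < len(static_elements) and token == static_elements[i]:
            matched.append(token)
            i += 1
        else:
            fields.append(token)
    return matched, fields

tokens = ["Order", "#", "A123", "Total:", "FLOAT"]
print(parse_with_grammar(tokens, ["Order", "#", "Total:"]))
# → (['Order', '#', 'Total:'], ['A123', 'FLOAT'])
```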
  • FIG. 9D shows an example of an abstract syntax tree 180 (AST) of structural elements 34, 36, 52, and 58 (which correspond to the structural elements shown in FIG. 3) and data fields 182, 184, 186 that have been parsed from the pre- processed electronic message 163 of FIG. 9B according to the grammar 176 of FIG. 9C.
  • FIG. 9E shows an example of a visualization 182 of the electronic message 161 of FIG. 9A showing data fields 184-222 that have been parsed from the pre-processed electronic message 163 of FIG. 9B as a result of traversing the syntax tree 180 of FIG. 9D and extracting the unidentified ones of the tokens that do not match any of the structural elements in the grammar as data fields.
  • the computer apparatus determines a respective set of additional features from each electronic message (FIG. 8, block 226).
  • the determined features correspond to the features that are extracted during the training process described above.
  • the computer apparatus applies respective sets of the parsed tokens and extracted features to the order-level price classifier 106, the identifier classifier 108, the item description classifier 110, and the item-level classification heuristics 228 described above.
  • the price classifier 106 labels the extracted candidate price data field tokens with respective ones of the following order-level price labels: shipping; tax; total; sub-total; and discount.
  • the identifier classifier 108 labels respective ones of the extracted candidate identifier data field tokens with respective ones of the following identifier labels: order number; tracking number; and SKU.
  • the item description classifier labels respective ones of the extracted data field tokens as item descriptions.
  • the computer apparatus applies the item-level classification heuristics to label respective ones of the extracted data field tokens with item-level quantity and price labels.
  • After classification, the computer apparatus outputs an extracted set of price data, identifier data, item description data, and item-level quantity and price data for each electronic message.
  • the computer apparatus typically stores this product purchase information in non-transitory computer-readable memory.
  • the product purchase information may be stored in one or more data structures that include associations between the product purchase relevant labels and the product purchase information of the respective ones of the extracted product purchase data field tokens.
  • the extracted product purchase information may be used in a wide variety of useful and tangible real-world applications.
  • the extracted product purchase information is processed, for example, to display information about the users' purchases, including information for tracking in-transit orders, information for accessing purchase details, and aggregate purchase summary information.
  • the extracted product purchase information is processed, for example, to assist in targeting advertising to consumers based on their purchase histories.
  • the extracted product purchase information is processed to provide, for example, anonymous item-level purchase detail across retailers, categories, and devices.
  • FIG. 10 shows an example of a graphical user interface 230 presenting a set of product purchase information for a particular consumer (i.e., Consumer A).
  • the product purchase information for Consumer A is presented by product in reverse chronological order by order date to provide the purchase history for Consumer A.
  • the product purchase information includes Order Date, Item Description, Price, Merchant, and Status. This presentation of product purchase information allows Consumer A to readily determine information about the products in the purchase history, such as prices paid and delivery status. In this way, Consumer A is able to readily determine what he bought, where he bought it, and when it will arrive without having to review the original electronic messages (e.g., e-mail messages) containing the product purchase information.
  • FIG. 11 shows an exemplary embodiment of computer apparatus that is implemented by a computer system 320.
  • the computer system 320 includes a processing unit 322, a system memory 324, and a system bus 326 that couples the processing unit 322 to the various components of the computer system 320.
  • the processing unit 322 may include one or more data processors, each of which may be in the form of any one of various commercially available computer processors.
  • the system memory 324 includes one or more computer-readable media that typically are associated with a software application addressing space that defines the addresses that are available to software applications.
  • the system memory 324 may include a read only memory (ROM) that stores a basic input/output system (BIOS) that contains startup routines for the computer system 320, and a random access memory (RAM).
  • the system bus 326 may be a memory bus, a peripheral bus or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, MicroChannel, ISA, and EISA.
  • the computer system 320 also includes a persistent storage memory 328 (e.g., a hard drive, a floppy drive, a CD ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus 326 and contains one or more computer-readable media disks that provide non-volatile or persistent storage for data, data structures and computer-executable instructions.
  • a user may interact (e.g., input commands or data) with the computer system 320 using one or more input devices 330 (e.g. one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors, and touch pads). Information may be presented through a graphical user interface (GUI) that is presented to the user on a display monitor 332, which is controlled by a display controller 334.
  • the computer system 320 also may include other input/output hardware (e.g., peripheral output devices, such as speakers and a printer).
  • the computer system 320 connects to other network nodes through a network adapter 336 (also referred to as a "network interface card” or NIC).
  • a number of program modules may be stored in the system memory 324, including application programming interfaces 338 (APIs), an operating system (OS) 340 (e.g., the Windows® operating system available from Microsoft Corporation of Redmond, Washington U.S.A.), software applications 341 including one or more software applications programming the computer system 320 to perform one or more of the process of building a structure learning parser and the process of parsing electronic messages with a structure learning parser, drivers 342 (e.g., a GUI driver), network transport protocols 344, and data 346 (e.g., input data, output data, program data, a registry, and configuration settings).
  • the one or more server network nodes of the product providers 18, 42, and the recommendation provider 44 are implemented by respective general-purpose computer systems of the same type as the client network node 320, except that each server network node typically includes one or more server software applications.
  • one or more of the product purchase information provider 12, the product merchants 14, the product delivery providers 16, the message providers 18, and the product purchase information consumers 20 shown in FIG. 1 are implemented by server network nodes that correspond to the computer apparatus 320.
  • The inventions described herein provide improved systems, methods, and computer-readable media for extracting product purchase information from electronic messages.

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Improved systems and methods for extracting product purchase information from electronic messages (22) transmitted between physical network nodes to convey product purchase information to designated recipients. These examples provide a product purchase information extraction service (12) that is able to extract product purchase information from electronic messages with high precision across a wide variety of electronic message formats and thereby solve the practical problems that have arisen as a result of the proliferation of different electronic message formats (32) used by individual merchants (14) and across different merchants and different languages. In this regard, these examples are able to automatically learn the structures and semantics of different message formats (32), which accelerates the ability to support new message sources, new markets, and different languages.

Description

EXTRACTING PRODUCT PURCHASE INFORMATION FROM
ELECTRONIC MESSAGES
BACKGROUND
[0001] People purchase products from many different merchants using a variety of different payment options. The transactions for these purchases typically are confirmed by physical in-store receipts or by electronic messages addressed to the purchasers' messaging accounts (e.g., a purchaser's electronic mail account). The large number and diversity of confirmation types makes it difficult for people to track their purchases and obtain a comprehensive understanding of their purchase histories.
[0002] In addition, the large diversity of merchants from which people purchase products makes it difficult for merchants to obtain sufficient purchase history data to develop accurate customer profiles. Even assuming that a person uses a common identifier (e.g., a loyalty card or credit card) for all his or her purchases, these purchases typically are tracked only by the merchant that issued the identifier to the customer. This lack of information about the customer limits a merchant's ability to effectively target its promotions in ways that will encourage them to purchase the merchant's product offerings.
[0003] The large diversity of merchants also leads to a large diversity in confirmation formats, making it difficult and expensive to extract product purchase information from purchase confirmations.
DESCRIPTION OF DRAWINGS
[0004] FIG. 1 is a diagrammatic view of an example of a network
communication environment.
[0005] FIG. 2 is a diagrammatic view of electronic message processing stages performed by an example of a product purchase information provider.
[0006] FIG. 3A is a diagrammatic view of an example of an electronic message.
[0007] FIG. 3B is a diagrammatic view of the electronic message of FIG. 3A showing data fields that have been identified according to an example of a product purchase information extraction process.
[0008] FIG. 4 is a flow diagram of an example of a process of training a structure learning parser for labeling data fields of an electronic message.
[0009] FIG. 5 is a flow diagram of an example of the structure learning parser training process of FIG. 4.
[0010] FIG. 6A is a diagrammatic view of an example of a set of electronic messages.
[0011] FIG. 6B is a diagrammatic view of the set of electronic messages of FIG. 6A after being pre-processed.
[0012] FIG. 6C is a diagrammatic view of a generalized suffix tree
representation of contents of respective ones of the pre-processed electronic messages of FIG. 6B.
[0013] FIG. 6D is a diagrammatic view of a grammar extracted from the generalized suffix tree representation of FIG. 6C.
[0014] FIG. 7 is a flow diagram of an example of a process of extracting product purchase information from electronic messages.
[0015] FIG. 8 is a flow diagram of an example of the product purchase information extraction process of FIG. 7.
[0016] FIG. 9A is a diagrammatic view of an example of an electronic message.
[0017] FIG. 9B is a diagrammatic view of an example of the electronic message of FIG. 9A after being pre-processed.
[0018] FIG. 9C is a diagrammatic view of a grammar matched to the pre- processed electronic message of FIG. 9B.
[0019] FIG. 9D is a diagrammatic view of a syntax tree parsed from the pre- processed electronic message of FIG. 9B according to the grammar of FIG. 9C.
[0020] FIG. 9E is a diagrammatic view of an example of a visualization of the electronic message of FIG. 9A showing data fields that are identified in the syntax tree shown in FIG. 9D.
[0021] FIG. 10 is a diagrammatic view of an example of a graphical user interface presenting aggregated product purchase information.
[0022] FIG. 11 is a block diagram of an example of computer apparatus.
DETAILED DESCRIPTION
[0023] In the following description, like reference numbers are used to identify like elements. Furthermore, the drawings are intended to illustrate major features of exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
1 . DEFINITION OF TERMS
[0024] A "product" is any tangible or intangible good or service that is available for purchase or use.
[0025] "Product purchase information" is information related to the purchase of a product. Product purchase information includes, for example, purchase
confirmations (e.g., receipts), product order information (e.g., merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, and order total), and product shipping information (e.g., billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number).
[0026] An "electronic message" is a persistent text based information record sent from a sender to a recipient between physical network nodes and stored in non- transitory computer-readable memory. An electronic message may be structured message (e.g., a hypertext markup language (HTML) message that includes structured tag elements) or unstructured (e.g., a plain text message).
[0027] A "computer" is any machine, device, or apparatus that processes data according to computer-readable instructions that are stored on a computer-readable medium either temporarily or permanently. A "computer operating system" is a software component of a computer system that manages and coordinates the performance of tasks and the sharing of computing and hardware resources. A "software application" (also referred to as software, an application, computer software, a computer application, a program, and a computer program) is a set of instructions that a computer can interpret and execute to perform one or more specific tasks. A "data file" is a block of information that durably stores data for use by a software application.
[0028] The term "computer-readable medium" (also referred to as "memory") refers to any tangible, non-transitory device capable storing information (e.g.,
instructions and data) that is readable by a machine (e.g., a computer). Storage devices suitable for tangibly embodying such information include, but are not limited to, all forms of physical, non-transitory computer-readable memory, including, for example, semiconductor memory devices, such as random access memory (RAM), EPROM, EEPROM, and Flash memory devices, magnetic disks such as internal hard disks and removable hard disks, magneto-optical disks, DVD-ROM/RAM, and CD-ROM/RAM.
[0029] A "network node" (also referred to simply as a "node") is a physical junction or connection point in a communications network. Examples of network nodes include, but are not limited to, a terminal, a computer, and a network switch. A "server node" is a network node that responds to requests for information or service. A "client node" is a network node that requests information or service from a server node.
[0030] As used herein, the term "includes" means includes but not limited to, the term "including" means including but not limited to. The term "based on" means based at least in part on.
2. EXTRACTING PRODUCT PURCHASE INFORMATION FROM ELECTRONIC MESSAGES
A. INTRODUCTION
[0031 ] The examples that are described herein provide improved systems and methods for extracting product purchase information from electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients by solving practical problems that have arisen as a result of the proliferation of different electronic message formats used by individual merchants and across different merchants and different languages. These examples provide a product purchase information extraction service that is able to extract product purchase information from electronic messages with high precision across a wide variety of electronic message formats. In this regard, these examples are able to automatically learn the structures and semantics of different message formats, which accelerates the ability to support new message sources, new markets, and different languages.
[0032] By these improved systems and methods, product purchase
information can be extracted from a wide variety of electronic message types and aggregated to provide individuals with enhanced tools for visualizing and organizing their purchase histories and to provide merchants and other organizations improved cross-merchant purchase graph information across different consumer demographics to enable targeted and less intrusive advertising and other marketing strategies. These improved systems and methods can be deployed to monitor consumer purchases over time to obtain updated purchase history information that can be aggregated for an individual consumer or across many consumers to provide actionable information that directs consumer behavior and organizational marketing strategies. For example, these improved systems and methods can organize disparate product purchase information extracted from individual electronic messages into actionable data that can be used by a consumer to organize her prior purchases and enhance her understanding of her purchasing behavior and can be used by merchants and other organizations to improve the accuracy and return-on-investment of their marketing campaigns.
[0033] In specific examples, these systems and methods include improved special purpose computer apparatus programmed to build a structure learning parser that automatically learns the structure of an electronic message and accurately parses product purchase information from the electronic message. These systems and methods also include improved special purpose computer apparatus programmed to function as a structure learning parser that automatically learns the structure of an electronic message and accurately parses product purchase information from the electronic message.
B. EXEMPLARY OPERATING ENVIRONMENT
[0034] FIG. 1 shows an example of a network communications environment 10 that includes a network 1 1 that interconnects a product purchase information provider 12, one or more product merchants 14 that sell products, one or more product delivery providers 16 that deliver purchased products to purchasers, one or more message providers 18 that provide message handling services, and one or more product purchase information consumers 20 that purchase product purchase
information and services from the product purchase information provider 12.
[0035] The network 1 1 may include any of a local area network (LAN), a metropolitan area network (MAN), and a wide area network (WAN) (e.g., the internet). The network 1 1 typically includes a number of different computing platforms and transport facilities that support the transmission of a wide variety of different media types (e.g., text, voice, audio, and video) between network nodes 14 and the product provider 18. Each of the product purchase information provider 12, the product merchants 14, the product delivery providers 16, the message providers 18, and the product purchase information consumers 20 typically connects to the network 1 1 via a network node (e.g., a client computer or a server computer) that includes a tangible computer-readable memory, a processor, and input/output (I/O) hardware (which may include a display). [0036] One or more of the product merchants 14 typically allow consumers and businesses to directly purchase products over the network 22 using a network enabled software application, such as a web browser. One or more of the of the product merchants 14 also may allow consumers and businesses to purchase products in a physical retail establishment. In either case, after a product purchase transaction has been completed, a product merchant may send a product purchase confirmation electronic message to a messaging address associated with the purchaser. The product purchase confirmation message may include, for example, product order information such as merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, and order total. The product merchant also may arrange to have the product delivered by one of the product delivery providers 16. Depending on the type of product that was purchased, the product delivery provider 16 may deliver the product to the purchaser physically or electronically. 
In either case, the product delivery provider 16 or the product merchant 14 may send a delivery notification electronic message to the messaging address associated with the purchaser. The delivery notification electronic message may include, for example, product shipping information such as product order information, billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number.
[0037] In general, the purchaser's messaging address may be any type of network address to which electronic messages may be sent. Examples of such messaging addresses include electronic mail (e-mail) addresses, text messaging addresses (e.g., a sender identifier, such as a telephone number or a user identifier for a texting service), a user identifier for a social networking service, and a facsimile telephone number. The product purchase related electronic messages typically are routed to the purchaser through respective ones of the message providers 18 associated with the purchaser's messaging address. The message providers 18 typically store the purchasers' electronic messages in respective message folder data structures in a database.
[0038] The product purchase information provider 12 extracts product purchase information from the electronic messages of product purchasers. In some examples, the product purchase information provider 12 obtains authorization from the product purchasers to access their respective message folders that are managed by the message providers 18. In other examples, product purchasers allow the product purchase information provider 12 to access their electronic messages that are stored on their local communication devices (e.g., personal computer or mobile phone).
[0039] Referring to FIG. 2, after obtaining authorization to access the electronic messages 22 of a purchaser, the product purchase information provider 12 processes the electronic messages 22 through a number of stages before producing processed data 24 that is provided to the product purchase information consumers 20. These stages include a message discovery stage 26, a field extraction stage 28, and a data processing stage 30.
[0040] In the message discovery stage 26, the product purchase information provider 12 identifies the ones of the electronic messages 22 that relate to product purchases. In some examples, rule-based filters and machine learning classifiers are used to identify product purchase related electronic messages.
[0041 ] In the field extraction stage 28, the product purchase information provider 12 extracts product purchase information from the identified ones of the electronic messages 22. Examples of such product purchase information include merchant name, order number, order date, product description, product name, product quantity, product price, sales tax, shipping cost, order total, billing address, shipping company, shipping address, estimated shipping date, estimated delivery date, and tracking number.
[0042] In the data processing stage 30, the product purchase information provider 12 processes the extracted product purchase information for according to the different types of product purchase information consumers. For example, for individual users, the extracted product purchase information is processed, for example, to display information about the users' purchases, including information for tracking in-transit orders, information for accessing purchase details, and aggregate purchase summary information. For advertisers, the extracted product purchase information is processed, for example, to assist in targeting advertising to consumers based on their purchase histories. For market analysts, the extracted product purchase information is processed to provide, for example, anonymous item-level purchase detail across retailers, categories, and devices. C. EXTRACTING PRODUCT PURCHASE INFORMATION
I. INTRODUCTION
[0043] In the examples explained in detail below, the product purchase information provider 12 includes a structure learning parser that extracts product purchase information from an electronic message using a grammar based parsing approach to identify structural elements and data fields in the electronic message and a machine learning approach to classify the data fields. The structural elements correspond to static, optional, and iterating elements that commonly appear in a particular type of product purchase related electronic message, whereas the data fields contain the variable information at least some of which corresponds to the product purchase information that is extracted.
[0044] FIG. 3A shows an example of an electronic message 32 for a product order, and FIG. 3B shows the electronic message 32 with data fields (marked with bold font) that have been identified according to an example of a product purchase
information extraction process. In this example, the structural elements are: an introductory "Dear" 36; standard informational text 36 (i.e., "Thank you for placing your order ... once your item has been shipped."); "Order Number:" 38; "Order Summary" 40; "Product Subtotal:" 42; "Discounts:"; "Shipping Charges:" 46; "Tax:" 48; "Total:" 50; "Part No" 52; "Product Price" 54; "Discount" 56; "Part No" 58; "Product Price" 60; and
"Discount" 62. The structural elements 34-50 are static elements and the sets of structural elements 52-56 and 58-62 include the same static elements that repeat in respective iterating elements. The non-structural elements (e.g., prices, order number, and part numbers) of the electronic message are data fields that are extracted and classified by the structure learning parser component of the product purchase
information provider 12.
II. BUILDING A STRUCTURE LEARNING PARSER
[0045] FIG. 4 shows an example of a method of building a structure learning parser that extracts product purchase information from an electronic message. In the illustrated examples, computer apparatus is programmed to perform the method of FIG. 4.
[0046] In accordance with the method of FIG. 4, the computer apparatus groups electronic messages into respective clusters based on similarities between the electronic messages, the electronic messages having been transmitted between physical network nodes to convey product purchase information to designated recipients (FIG. 4, block 70). For each cluster, the computer apparatus extracts a respective grammar defining an arrangement of structural elements of the electronic messages in the cluster (FIG. 4, block 72). Based on training data that includes fields of electronic messages comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, the computer apparatus builds a classifier that classifies fields of a selected electronic message that includes product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars matched to the selected electronic message (FIG. 4, block 74). The computer apparatus typically stores the grammars and the classifier in non-transitory computer-readable memory in one or more data structures permitting computer-based parsing of product purchase information from electronic messages.
[0047] In some examples, a structure learner parser builder includes a product purchase information grammar extractor that performs the grouping and extracting operations of blocks 70-72 of FIG. 4, and a product purchase information token classifier trainer that performs the classifier building operation of block 74 of FIG. 4. In some examples, the structure learner parser builder is a software application that programs a computer to perform the grouping and extracting operations of blocks 70-72 implements the product purchase information grammar extractor, where a different respective software module includes a respective set of computer-readable instructions for performing the grouping and extracting operations. In some examples, the product purchase information token classifier trainer is a machine learning training software application that programs a computer to perform the classifier building operation of block 74.
[0048] FIG. 5 shows a flow diagram of an example of the structure learning parser building process of FIG. 4.
[0049] In this example, the computer apparatus retrieves from a data store (e.g., a database) electronic messages 80 that have been transmitted between physical network nodes to convey product purchase information to designated recipients. FIG. 6A shows an example 81 of one of the electronic messages 80. [0050] The computer apparatus pre-processes the electronic messages 80 (FIG. 5, block 82). In this process, the computer apparatus tokenizes the text-based contents of the electronic messages by extracting contiguous strings of symbols (e.g., symbols representing alphanumeric characters) separated by white spaces. The contiguous symbol strings typically correspond to words and numbers. The computer apparatus then replaces tokens that match patterns for integers and real numbers (typically prices) in the electronic messages 80 with wildcard tokens. FIG. 6B shows an example of a pre-processed version 83 of the electronic message 81 in which integers have been replaced with the wildcard token "INT" and real numbers have been replaced with the wildcard token "FLOAT". The replacement of the variable integer and real number elements of each electronic message with wildcard tokens improves the detection of iterating elements of the electronic messages.
[0051 ] For each of the pre-processed messages 84 (FIG. 5, block 86), the computer apparatus attempts to determine a merchant that is associated with the electronic message (FIG. 5, block 90). For some types of electronic messages, the computer apparatus attempts to determine the merchant from header information that includes supplemental information about the electronic message. For example, an electronic mail (e-mail) message includes header information that indicates the sender, the recipient, and the subject of the electronic mail message, and a text message typically includes a Sender ID that indicates the sender of the message. In some cases, the computer apparatus may be able to determine the merchant from the sender or subject contained in the header information. In some cases, the computer apparatus may attempt to determine the merchant from the content of the electronic message.
[0052] The computer apparatus clusters the electronic messages by merchant (FIG. 5, block 92). In this process, the computer apparatus sorts the electronic messages into groups by message sender, where each message sender is associated with a respective one of the groups of electronic messages. For each group of merchant-specific electronic messages, the computer apparatus clusters the electronic messages within the group into one or more clusters based on similarities between the electronic messages. The result is a respective set 94, 96 of clusters (cluster 1 ,1 ... cluster 1 ,n, cluster k,1 ... cluster k,m) for each merchant, where each cluster consists of electronic messages that are similar to one another. [0053] In some examples, for each merchant-specific set of electronic messages, the computer apparatus applies to the electronic messages a clustering process (e.g., a Density-Based Spatial Clustering of Applications with Noise (DBSCAN) process, a k-means clustering process, or a hierarchical clustering process) that clusters electronic messages based on measures of content similarity between pairs of the electronic messages. In an example of this process, electronic messages are processed serially. A new cluster is created for the first electronic message. Each successive electronic message to be clustered is compared to each of the electronic messages in each existing cluster and is added to the cluster containing an electronic message having a similarity with the electronic message being clustered that exceeds a similarity threshold; if the electronic message being clustered has a similarity that exceeds the similarity threshold with the electronic messages of multiple clusters, the multiple clusters are merged into a single cluster. If the similarities between the electronic message being clustered and the previously clustered electronic messages do not exceed the similarity threshold, a new cluster is created for the electronic message being clustered.
[0054] In some examples, measures of content similarity compare similarity and diversity of the contents of pairs of electronic messages. In some of these examples, the similarity measure corresponds to the Jaccard similarity coefficient, which measures similarity between two electronic messages based on the size of the intersection divided by the size of the union of features of the electronic messages. In some of these examples, the computer apparatus extracts lines of content (i.e., whole lines, as opposed to individual words in the lines) from each electronic message as the features that are compared, and measures similarities between electronic messages using line-based comparisons of the extracted content. This line-based feature matching approach improves the accuracy of the clustering process by narrowing the range of matches between electronic messages.
[0055] After the electronic messages have been grouped into the merchant- specific sets 94, 96 of electronic message clusters, the computer apparatus determines a respective grammar for each electronic message cluster (FIG. 5, block 98).
[0056] In some examples, for each cluster, the computer apparatus builds a respective generalized suffix tree representation of contents of the electronic messages in the cluster, and ascertains the arrangement of structural elements of the electronic messages in the cluster based on the respective generalized suffix tree representation. The suffix tree representation contains all the suffixes (which are one or more word sequences that are referred to as "phrases") as their keys and positions in the text as their values. The suffix tree representation maintains the order of suffixes in a hierarchical tree structure of nodes that are linearly interconnected from root to leaf node and, for each, suffix, identifies the electronic messages in which the suffix appears and the number of times it appears in each electronic message. In some examples, the suffix tree representation of the electronic messages in a given cluster is built by applying Ukkonen's algorithm for constructing suffix trees (E. Ukkonen, "On-Line Construction of Suffix Trees," Algorithmica, September 1995, Volume 14, Issue 3, pp. 249-260 (1995)) to a single string formed by concatenating the tokenized contents of all the electronic messages in the given cluster.
[0057] FIG. 6C shows a diagrammatic view of an example of a generalized suffix tree 1 12 representation of the contents of respective ones of the pre-processed electronic messages 84 in a particular merchant-specific cluster (see FIG. 6B). In this example, the bolded nodes 1 18-126 correspond to static elements that are common to all the electronic messages in a given cluster and the leaves 1 14, 1 16 demarcate the ends of respective ones of the electronic messages.
[0058] The computer apparatus traverses the generalized suffix tree to identify structural elements of the electronic messages in a given cluster. In some examples, the computer identifies substrings that correspond to static elements, optional elements, and iterating elements in the electronic messages of the given cluster. In general, substrings that appear in all the electronic messages in the given cluster are considered static elements, substrings that appear in a majority (e.g., 90%) of the electronic messages in the given cluster are considered optional elements, and substrings that appear in all the electronic messages in the given cluster and sometimes repeat within individual ones of the electronic messages are considered iterating elements. Substrings that appear in less than a majority (e.g., below 10%) of the electronic messages of a given cluster are considered electronic message specific elements that are extracted as data fields.
[0059] The computer apparatus typically applies a series of processes to the tree to detect structural elements of the electronic messages in a given cluster. These processes operate on branches and the special characters that terminate the branches to represent respective ones of the electronic messages in the given cluster.
[0060] In one exemplary process for identifying static elements, the computer apparatus traverses each branch from the root element until it splits into subbranches. If the subbranches all end with electronic message terminal characters with one terminal character for each subbranch, then the branch is common across all the electronic messages of the cluster and the computer apparatus labels the token sequence corresponding to the branch as a static element.
[0061 ] The process of identifying iterating elements is similar to the process of identifying static elements. In one example, the computer apparatus locates each branch in the generalized suffix tree that splits into the sub-branches and inspects the terminal character of the branch. Unlike the static detection process where the computer apparatus locates branches that split into the set of terminals matching the set of electronic messages in the cluster, the process of identifying iterating elements involves locating each branch that splits into terminal characters that match all the electronic messages in the given cluster and match at least one of the electronic messages in the given cluster more than once. In some examples, the computer apparatus applies rules to branches, such as minimum token sequence length and a minimum threshold variance of the repeating token sequence across the electronic messages in the given cluster. The minimum token sequence length rule filters out common words (e.g., "the" and "and") and product names that appear frequently in electronic messages. The minimum threshold variance criterion distinguishes iterating sections from static elements that appear infrequently in the electronic messages of the given cluster. For example, an electronic message that contains a product confirmation for a book have the title "Thanks for your purchase" in an iterating section might be incorrectly identified if the same phrase is used elsewhere in the text of the electronic message, but because the token sequence "Thanks for your purchase" appears very infrequently in this section of the electronic messages of the cluster, its variance value in this section would be very low and therefore would not be misidentified as an iterating section of the electronic messages in which it appeared.
[0062] The structural elements that are identified by traversing the
generalized suffix tree for a given cluster are incorporated into a data structure (referred to herein as a "grammar") that preserves the sequence of the static, optional, and iterating elements in the generalized suffix tree. In some examples, each grammar recursively defines allowable arrangements of the tokens corresponding to the structural elements. The computer apparatus typically stores the cluster grammars in one or more data structures in non-transitory computer-readable memory.
[0063] FIG. 6D shows an example of a grammar 130 that is extracted from the generalized suffix tree representation of FIG. 6C. The grammar preserves the arrangement (e.g., order) of the static elements 132, 134, 136, the optional elements, the iterating elements 138, and the data fields 140, 142, 144.
[0064] Referring back to FIG. 5, in addition to determining a respective grammar for each cluster in each of the merchant-specific sets 94, 96 of clusters, the computer apparatus trains one or more classifiers to label the data fields that are determined for each cluster of electronic messages.
[0065] In some examples, the computer apparatus selects a training set 102 of electronic messages. In the illustrated example, the training set 102 is selected from the collection of pre-processed messages 84; in other examples the training set 102 is selected from another collection of electronic messages that include product purchase information. In some examples, the electronic messages in the training set 102 are selected without regard to the merchant associated with the electronic messages. As a result, a single training set can be used to train the one or more data field labeling classifiers across a wide variety of different merchants, which increases the scalability of the training process as compared with a training process in which a respective set of classifiers is trained for each merchant.
[0066] The computer apparatus or human operator (e.g., a machine learning engineer) identifies features in the training set 102 of electronic messages that will be used to train the one or more classifiers 106-1 10 (FIG. 5, block 104). In this process, the data fields that are to be labeled by the one or more classifiers are identified in the training electronic messages and used to create the features that will be used to train the one or more classifiers to associate the correct label with the target data fields.
[0067] In the illustrated example, three classifiers are built: a price classifier 106, an identifier classifier 108, and an item description classifier.
[0068] The price classifier 106 is a machine learning classifier that is trained to label ones of the extracted field tokens with respective price labels in a
predetermined price classification taxonomy. In some examples, the price classifier 106 is trained to label price token variants with the following order-level price labels:
shipping; tax; total; sub-total; and discount.
[0069] In some examples, the computer apparatus identifies candidate price field tokens in the training set 102 of electronic messages (e.g., for U.S. dollar based prices, the computer apparatus looks for a "$" symbol followed by a decimal number consisting of an integer part and a two-digit fractional part separated by the decimal separator "."). For each candidate price, the computer apparatus determines features from the words used in the static token sequence that precedes the candidate price field token. In some examples, the computer apparatus breaks the static token sequence preceding a particular candidate price into two-word phrases (including special character words demarcating the beginning and end of the sequence, such as <start> and <end>) that are used as the features for training the price classifier to label that particular price with the assigned label from the price taxonomy. For example, if the static token sequence preceding an identified price field tokens that is assigned the "total" price label consists of "You paid the total:", the computer apparatus would convert the static token sequence into the following features: "<start> you"; "you paid"; "paid the"; "the total:"; "total: <end>". During the training process, the price classifier automatically learns the weights to assign to the features based on the training data. In some examples, the price classifier 106 is trained according to a na'fve Bayes training process.
[0070] The identifier classifier 108 is a machine learning classifier that is trained to label respective ones of the extracted field tokens with an identifier label in a predetermined identifier classification taxonomy. In some examples, the identifier classifier 108 is trained to label identifier variants into the following identifier labels:
order number; tracking number; and SKU (Stock Keeping Unit).
[0071 ] In some examples, the computer apparatus identifies candidate identifier field tokens (e.g., non-decimal numeric and alphanumeric strings) in the training set 102 of electronic messages. For each candidate identifier, the computer apparatus trains the identifier classifier 108 to classify the candidate identifier based on features that include (i) a token extracted from the selected electronic message that corresponds to a static structural element of the respective grammar that immediately precedes the identifier field token in the selected electronic message, and (ii)
characteristics of the identifier field token. In some examples, the computer apparatus breaks the static token sequence preceding a particular candidate identifier into two- word phrases (including special character words demarcating the beginning and end of the sequence, such as <start> and <end>) that are used as the features for training the identifier classifier to label that particular price with the assigned label from the identifier taxonomy. In addition, the computer apparatus uses characteristics of the candidate identifier field token, including the symbol length of the candidate identifier, the percentage of numeric symbols (also referred to as digits) in the candidate identifier, the location of the candidate identifier in the electronic message (e.g., in the subject field in the header of the electronic message, at the top of the body of the electronic message, or at the bottom of the body of the electronic message). During the training process, the identifier classifier 108 automatically learns the weights to assign to the features based on the training data. In some examples, the identifier classifier is 108 trained according to a logistic regression training process.
[0072] The item description classifier 110 is a machine learning classifier that is trained to label respective ones of the extracted field tokens as an item description. In some examples, the computer apparatus identifies candidate item description field tokens (e.g., word phrase symbol strings) in the training set 102 of electronic messages. For each candidate item description, the computer apparatus trains the classifier to classify the candidate item description based on features that include, for example: the percentage of phrases that the candidate item description has in common with a known item description (e.g., an item description in a database of product descriptions, such as a list of products previously purchased by the recipient of the electronic message or a product catalogue of the merchant associated with the electronic message); the percentage of phrases that the candidate item description has in common with a compilation of phrases that are known to not be part of product descriptions (e.g., identifier related phrases, such as "Order No.", and order-level price related phrases, such as "Total Price", are examples of phrases that typically are included in the compilation as not corresponding to item descriptions); and the percentage of capitalized symbols in the candidate item description field tokens. During the training process, the item description classifier 110 automatically learns the weights to assign to the features based on the training data. In some examples, the item description classifier 110 is trained according to a logistic regression training process.

[0073] In some examples, in addition to building the price classifier 106, the identifier classifier 108, and the item description classifier 110, the computer apparatus also applies heuristics to classify candidate item-level quantity field tokens and candidate item-level price field tokens. An example of an item-level quantity classification heuristic is the magnitude of the numeric field token in an iterating section of an electronic message. An example of an item-level price classification heuristic is a phrase of one or more words (e.g., "item price") that appears in a static token sequence that precedes a candidate price field token in an iterating section of an electronic message.
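The item-description features of paragraph [0072] might be computed along the following lines. This is a sketch under stated assumptions: phrase overlap is computed over word bigrams, and "capitalized symbols" is interpreted as words beginning with an upper-case letter.

```python
def item_description_features(candidate, known_phrases, excluded_phrases):
    """Sketch of the three item-description features described above.

    `known_phrases` stands in for phrases drawn from known product
    descriptions (e.g., a recipient's purchase list or a merchant
    catalogue); `excluded_phrases` stands in for the compilation of
    phrases known NOT to be part of product descriptions (e.g.,
    "Order No.", "Total Price").
    """
    words = candidate.split()
    # Two-word phrases; fall back to single words for one-word candidates.
    phrases = {" ".join(p) for p in zip(words, words[1:])} or set(words)

    def pct_overlap(reference):
        return len(phrases & reference) / len(phrases)

    return {
        "pct_known": pct_overlap(known_phrases),
        "pct_excluded": pct_overlap(excluded_phrases),
        "pct_capitalized": sum(w[:1].isupper() for w in words) / len(words),
    }

f = item_description_features("Acme Anvil Deluxe", {"Acme Anvil"}, {"Total Price"})
```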
III. PARSING ELECTRONIC MESSAGES WITH A STRUCTURE LEARNING PARSER
[0074] FIG. 7 shows a method by which an example of a structure learning parser extracts product purchase information from an electronic message. In the illustrated examples, the computer apparatus is programmed to perform the method of FIG. 7.
[0075] In accordance with the method of FIG. 7, the computer apparatus matches a selected electronic message to one of multiple clusters of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar defining an arrangement of structural elements of electronic messages in the cluster (FIG. 7, block 150). The computer apparatus segments the selected electronic message into tokens that include product purchase information (FIG. 7, block 152). The computer apparatus parses the tokens in accordance with the grammar associated with the cluster matched to the selected electronic message, where the parsing process includes identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar and extracting unidentified ones of the tokens as field tokens (FIG. 7, block 154). The computer apparatus determines classification features of the selected electronic message (FIG. 7, block 156). The computer apparatus classifies respective ones of the extracted field tokens with respective product purchase relevant labels based on respective ones of the
determined features (FIG. 7, block 158). In non-transitory computer-readable memory, the computer apparatus typically stores associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens in one or more data structures (e.g., a database) permitting computer-based generation of actionable purchase history information.
[0076] In some examples, the structure learning parser includes a product purchase information token parser that performs the matching, segmenting, and parsing operations of blocks 150-154 of FIG. 7, and a product purchase information token classifier that performs the determining and classifying operations of blocks 156-158 of FIG. 7. In some examples, a software application that programs a computer to perform the matching, segmenting, and parsing operations of blocks 150-154 implements the product purchase information token parser, where a different respective software module includes a respective set of computer-readable instructions for performing the matching, segmenting, and parsing operations. In some examples, a machine learning software application that programs a computer to perform the determining and classifying operations of blocks 156-158 implements the product purchase information token classifier.
[0077] FIG. 8 shows a flow diagram of an example of the electronic message parsing process of FIG. 7.
[0078] In this example, the computer apparatus retrieves from a data store (e.g., a database) electronic messages 160 that have been transmitted between physical network nodes to convey product purchase information to designated recipients. FIG. 9A shows an example 161 of one of the electronic messages 160.
[0079] The computer apparatus pre-processes the electronic messages 160 (FIG. 8, block 162). In this process, the computer apparatus tokenizes the text-based contents of the electronic messages by extracting contiguous strings of symbols (e.g., symbols representing alphanumeric characters) separated by white spaces. The contiguous symbol strings typically correspond to words and numbers. The computer apparatus then replaces tokens that match patterns for integers and real numbers (typically prices) in the electronic messages 160 with wildcard tokens. FIG. 9B shows an example of a pre-processed version 163 of the electronic message 161 in which integers have been replaced with the wildcard token "INT" and real numbers have been replaced with the wildcard token "FLOAT". The replacement of the variable integer and real number elements of each electronic message with wildcard tokens improves the detection of iterating elements of the electronic messages.

[0080] For each of the pre-processed messages 164 (FIG. 8, block 166), the computer apparatus attempts to determine a merchant that is associated with the electronic message (FIG. 8, block 168). For some types of electronic messages, the computer apparatus attempts to determine the merchant from header information that includes supplemental information about the electronic message. For example, an electronic mail (e-mail) message includes header information that indicates the sender, the recipient, and the subject of the electronic mail message, and a text message typically includes a Sender ID that indicates the sender of the message. In some cases, the computer apparatus may be able to determine the merchant from the sender or subject contained in the header information. In some cases, the computer apparatus may attempt to determine the merchant from the content of the electronic message.
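The tokenization and wildcard substitution of paragraph [0079] can be sketched as follows. The numeric patterns are assumptions for illustration; currency symbols and thousands separators, for example, are not handled here.

```python
import re

FLOAT_PATTERN = re.compile(r"^\d+\.\d+$")  # real numbers, typically prices
INT_PATTERN = re.compile(r"^\d+$")         # integers, typically quantities

def preprocess(text):
    """Tokenize on white space and replace numeric tokens with wildcards.

    Replacing variable integers and real numbers with the wildcard tokens
    "INT" and "FLOAT" makes repeated line patterns identical, which
    improves detection of the iterating elements of a message.
    """
    tokens = text.split()  # contiguous symbol strings separated by white space
    out = []
    for tok in tokens:
        if FLOAT_PATTERN.match(tok):
            out.append("FLOAT")
        elif INT_PATTERN.match(tok):
            out.append("INT")
        else:
            out.append(tok)
    return out

print(preprocess("Qty: 2 Widget 12.99"))  # ['Qty:', 'INT', 'Widget', 'FLOAT']
```

Testing the real-number pattern before the integer pattern keeps a price such as "12.99" from being treated as two integers joined by a period.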
[0081] Based on the merchant determined to be associated with a respective one of the electronic messages, the computer apparatus attempts to match the electronic message to one of multiple clusters of electronic messages 170 that is associated with the determined merchant. In some examples, the set 170 of clusters corresponds to one of the merchant-specific sets 94, 96 of electronic message clusters into which the electronic messages 84 were grouped in the structure learning parser building process described above in connection with FIG. 5.
[0082] After determining the set 170 of clusters of electronic messages that is associated with the merchant associated with the electronic message, the computer apparatus matches the electronic message to a respective one of the clusters in the set 170 of clusters (FIG. 8, block 172). In some examples, the computer apparatus associates each of the clusters in the determined set 170 with a respective similarity score that indicates a degree of similarity between contents of the selected electronic message and contents of the electronic messages of the cluster. The computer apparatus then matches the electronic message to the cluster 174 in the set 170 that is associated with a highest one of the similarity scores.
[0083] In some examples, each similarity score compares similarity and diversity of the contents of the electronic message and contents of a respective one of the electronic messages of the associated cluster. In some examples, measures of content similarity compare similarity and diversity of the contents of pairs of electronic messages. In some of these examples, the similarity measure corresponds to the Jaccard similarity coefficient, which measures similarity between two electronic messages based on the size of the intersection divided by the size of the union of features of the electronic messages. In some of these examples, the computer apparatus extracts lines of content (i.e., whole lines, as opposed to individual words in the lines) from each electronic message as the features that are compared, and measures similarities between electronic messages using line-based comparisons of the extracted content. This line-based feature matching approach improves the accuracy of the clustering process by narrowing the range of matches between electronic messages.
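The line-based Jaccard matching just described can be sketched as follows. Aggregating a cluster's pairwise scores by taking the best member-level match is one plausible choice, not one stated in the text.

```python
def jaccard(lines_a, lines_b):
    """Jaccard similarity coefficient over line features:
    size of the intersection divided by size of the union of the
    sets of whole content lines extracted from two messages."""
    a, b = set(lines_a), set(lines_b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def match_cluster(message_lines, clusters):
    """Return the id of the cluster with the highest similarity score.

    `clusters` maps a cluster id to its member messages, each represented
    as a list of extracted content lines.
    """
    def score(members):
        return max(jaccard(message_lines, m) for m in members)
    return max(clusters, key=lambda cid: score(clusters[cid]))
```

Because whole lines rather than individual words are the compared features, only messages sharing entire boilerplate lines score highly, which is the narrowing effect the passage attributes to line-based matching.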
[0084] As explained above, each cluster in the matched merchant-specific set 170 of clusters is associated with a respective grammar that defines an arrangement of structural elements of electronic messages in the cluster. Based on this association, the computer apparatus determines the grammar 176 that is associated with the cluster 174 that is matched to the electronic message. FIG. 9C shows an example of the grammar 176 that is matched to the electronic message. In the illustrated example, the grammar 176 corresponds to the grammar 130 shown in FIG. 6D. As explained above, the grammar preserves the arrangement (e.g., order) of the static elements 132, 134, 136, the optional elements, the iterating elements 138, and the data fields 140, 142, 144. In some examples, the grammar recursively defines allowable arrangements of the tokens corresponding to the structural elements.
[0085] After determining the grammar 176 that is associated with the cluster 174 that is matched to the electronic message, the computer apparatus parses the electronic message according to the determined grammar 176 (FIG. 8, block 178). In this process, the computer apparatus matches the sequence of structural elements in the grammar to the tokens identified in the pre-processed version of the electronic message. The result is an ordered arrangement of tokens 224 matched to respective ones of the structural elements of the grammar and a set of unidentified ones of the tokens that are extracted as data fields.
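A greatly simplified version of this matching step, with the grammar reduced to just its ordered static elements (no optional or iterating elements), could look like the following sketch; the function name and this greedy strategy are assumptions.

```python
def parse_with_grammar(tokens, static_elements):
    """Match the grammar's static elements in order against the token
    stream; runs of tokens between matched static elements are the
    unidentified tokens, extracted here as field tokens."""
    fields, current = [], []
    remaining = list(static_elements)
    for tok in tokens:
        if remaining and tok == remaining[0]:
            remaining.pop(0)          # token matches next static element
            if current:
                fields.append(" ".join(current))
                current = []
        else:
            current.append(tok)        # unidentified token -> field data
    if current:
        fields.append(" ".join(current))
    return fields

msg = "Order No. INT Total : FLOAT".split()
print(parse_with_grammar(msg, ["Order", "No.", "Total", ":"]))  # ['INT', 'FLOAT']
```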
[0086] FIG. 9D shows an example of an abstract syntax tree 180 (AST) of structural elements 34, 36, 52, and 58 (which correspond to the structural elements shown in FIG. 3) and data fields 182, 184, 186 that have been parsed from the pre-processed electronic message 163 of FIG. 9B according to the grammar 176 of FIG. 9C.

[0087] FIG. 9E shows an example of a visualization 182 of the electronic message 161 of FIG. 9A showing data fields 184-222 that have been parsed from the pre-processed electronic message 163 of FIG. 9B as a result of traversing the syntax tree 180 of FIG. 9D and extracting the unidentified ones of the tokens that do not match any of the structural elements in the grammar as data fields.
[0088] Referring back to FIG. 8, in addition to parsing tokens in the electronic message according to the grammar (FIG. 8, block 178), the computer apparatus also determines a respective set of additional features from each electronic message (FIG. 8, block 226). The determined features correspond to the features that are extracted during the training process described above.
[0089] After the tokens have been parsed and the additional features have been extracted from the pre-processed version of the electronic message (FIG. 8, blocks 178, 226), the computer apparatus applies respective sets of the parsed tokens and extracted features to the order-level price classifier 106, the identifier classifier 108, the item description classifier 110, and the item-level classification heuristics 228 described above. In the illustrated examples, the price classifier 106 labels the extracted candidate price data field tokens with respective ones of the following order-level price labels: shipping; tax; total; sub-total; and discount. In the illustrated examples, the identifier classifier 108 labels respective ones of the extracted candidate identifier data field tokens with respective ones of the following identifier labels: order number; tracking number; and SKU. In the illustrated examples, the item description classifier labels respective ones of the extracted data field tokens as item descriptions. In the illustrated examples, the computer apparatus applies the item-level classification heuristics to label respective ones of the extracted data field tokens with item-level quantity and price labels.
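As a toy stand-in for the trained order-level price classifier, a lookup keyed on the nearest preceding cue word illustrates the kind of decision the learned weights encode. The phrase table and function are invented for illustration; the real classifier learns weights for such phrase features via logistic regression rather than using a fixed table.

```python
# Illustrative mapping from cue words in the preceding static token
# sequence to the order-level price labels named above.
PRICE_PHRASES = {
    "shipping": "shipping",
    "tax": "tax",
    "total": "total",
    "subtotal": "sub-total",
    "sub-total": "sub-total",
    "discount": "discount",
}

def label_price_field(preceding_static_tokens):
    """Label a price field token from the nearest preceding cue word,
    scanning the static token sequence right to left."""
    for word in reversed(preceding_static_tokens):
        label = PRICE_PHRASES.get(word.lower().rstrip(":"))
        if label is not None:
            return label
    return None

print(label_price_field(["Sales", "Tax:"]))  # tax
```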
[0090] After classification, the computer apparatus outputs an extracted set of price data, identifier data, item description data, and item-level quantity and price data for each electronic message. The computer apparatus typically stores this product purchase information in non-transitory computer-readable memory. For example, the product purchase information may be stored in one or more data structures that include associations between the product purchase relevant labels and the product purchase information of the respective ones of the extracted product purchase data field tokens.

D. EXTRACTED PRODUCT PURCHASE INFORMATION APPLICATIONS
[0091 ] The extracted product purchase information may be used in a wide variety of useful and tangible real-world applications. For example, for individual users, the extracted product purchase information is processed, for example, to display information about the users' purchases, including information for tracking in-transit orders, information for accessing purchase details, and aggregate purchase summary information. For advertisers, the extracted product purchase information is processed, for example, to assist in targeting advertising to consumers based on their purchase histories. For market analysts, the extracted product purchase information is processed to provide, for example, anonymous item-level purchase detail across retailers, categories, and devices.
[0092] FIG. 10 shows an example of a graphical user interface 230 presenting a set of product purchase information for a particular consumer (i.e., Consumer A). In this example, product purchase information for a set of products purchased by
Consumer A is presented by product in reverse chronological order by order date to provide the purchase history for Consumer A. The product purchase information includes Order Date, Item Description, Price, Merchant, and Status. This presentation of product purchase information allows Consumer A to readily determine information about the products in the purchase history, such as prices paid and delivery status. In this way, Consumer A is able to readily determine what he bought, where he bought it, and when it will arrive without having to review the original electronic messages (e.g., e-mail messages) containing the product purchase information.
[0093] Other exemplary applications of the extracted product purchase information are described in, for example, U.S. Patent Publication No. 20130024924 and U.S. Patent Publication No. 20130024525.
3. EXEMPLARY COMPUTER APPARATUS
[0094] Computer apparatus are specifically programmed (e.g., by a computer software application) to provide improved processing systems for performing the functionality of the methods described herein. In some examples, the process of building a structure learning parser and the process of parsing electronic messages with a structure learning parser are performed by separate and distinct computer apparatus. In other examples, the same computer apparatus performs these processes.

[0095] FIG. 11 shows an exemplary embodiment of computer apparatus that is implemented by a computer system 320. The computer system 320 includes a processing unit 322, a system memory 324, and a system bus 326 that couples the processing unit 322 to the various components of the computer system 320. The processing unit 322 may include one or more data processors, each of which may be in the form of any one of various commercially available computer processors. The system memory 324 includes one or more computer-readable media that typically are associated with a software application addressing space that defines the addresses that are available to software applications. The system memory 324 may include a read only memory (ROM) that stores a basic input/output system (BIOS) that contains startup routines for the computer system 320, and a random access memory (RAM). The system bus 326 may be a memory bus, a peripheral bus or a local bus, and may be compatible with any of a variety of bus protocols, including PCI, VESA, MicroChannel, ISA, and EISA. The computer system 320 also includes a persistent storage memory 328 (e.g., a hard drive, a floppy drive, a CD ROM drive, magnetic tape drives, flash memory devices, and digital video disks) that is connected to the system bus 326 and contains one or more computer-readable media disks that provide non-volatile or persistent storage for data, data structures and computer-executable instructions.
[0096] A user may interact (e.g., input commands or data) with the computer system 320 using one or more input devices 330 (e.g. one or more keyboards, computer mice, microphones, cameras, joysticks, physical motion sensors, and touch pads). Information may be presented through a graphical user interface (GUI) that is presented to the user on a display monitor 332, which is controlled by a display controller 334. The computer system 320 also may include other input/output hardware (e.g., peripheral output devices, such as speakers and a printer). The computer system 320 connects to other network nodes through a network adapter 336 (also referred to as a "network interface card" or NIC).
[0097] A number of program modules may be stored in the system memory 324, including application programming interfaces 338 (APIs), an operating system (OS) 340 (e.g., the Windows® operating system available from Microsoft Corporation of Redmond, Washington U.S.A.), software applications 341 including one or more software applications programming the computer system 320 to perform one or more of the process of building a structure learning parser and the process of parsing electronic messages with a structure learning parser, drivers 342 (e.g., a GUI driver), network transport protocols 344, and data 346 (e.g., input data, output data, program data, a registry, and configuration settings).
[0098] In some embodiments, the one or more server network nodes of the product providers 18, 42, and the recommendation provider 44 are implemented by respective general-purpose computer systems of the same type as the client network node 320, except that each server network node typically includes one or more server software applications.
[0099] In other embodiments, one or more of the product purchase information provider 12, the product merchants 14, the product delivery providers 16, the message providers 18, and the product purchase information consumers 20 shown in FIG. 1 are implemented by server network nodes that correspond to the computer apparatus 320.
4. CONCLUSION
[00100] The embodiments described herein provide improved systems, methods, and computer-readable media for extracting product purchase information from electronic messages.
[00101 ] Other embodiments are within the scope of the claims.

Claims

1. A method, the method comprising
by computer apparatus:
matching (172) a selected electronic message (166) to one of multiple clusters (170) of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar (176) defining an arrangement of structural elements of electronic messages in the cluster;
segmenting the selected electronic message (166) into tokens comprising product purchase information;
parsing (178) the tokens in accordance with the grammar (176) associated with the cluster matched to the selected electronic message (166), wherein the parsing (178) comprises identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar (176) and extracting unidentified ones of the tokens as field tokens (224);
determining (226) classification features of the selected electronic
message (166);
classifying (106, 108, 110) respective ones of the extracted field tokens (224) with respective product purchase relevant labels based on respective ones of the determined features; and
in non-transitory computer-readable memory (328, 324), storing
associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens (224) in one or more data structures permitting computer-based generation of actionable purchase history information.
2. The method of claim 1, wherein the matching (172) comprises:
identifying a message sender of the selected electronic message (166); ascertaining ones of the clusters (170) of electronic messages associated with the identified message sender;
with each of the ascertained clusters (170), associating a respective similarity score indicating a degree of similarity between contents of the selected electronic message (166) and contents of the electronic messages of the ascertained cluster; and matching (172) the selected electronic message (166) to the ascertained cluster associated with a highest one of the similarity scores.
3. The method of claim 2, wherein each similarity score compares similarity and diversity of the contents of the selected electronic message (166) and contents of a respective one of the electronic messages of the associated cluster.
4. The method of claim 1, wherein the segmenting comprises identifying contiguous strings of one or more symbols separated by blank spaces, and extracting each of the identified strings as a respective token.
5. The method of claim 1, wherein the grammar (176) recursively defines allowable arrangements of the tokens corresponding to the structural elements, and the parsing (178) comprises identifying sequences of one or more of the tokens
corresponding to respective ones of the allowable arrangements as extracted structural elements.
6. The method of claim 1, wherein the classifying (106, 108, 110) comprises classifying (106, 108, 110) each of respective ones of the extracted field tokens (224) as one of: a price in a predetermined price classification taxonomy; an identifier in a predetermined identifier classification taxonomy; and an item description.
7. The method of claim 1, wherein the structural elements defined in the grammar (176) comprise static structural elements each of which appears in all of the electronic messages in the respective cluster.
8. The method of claim 7, wherein: the parsing (178) comprises ascertaining price related ones of the field tokens (224) that correspond to prices;
the determining (226) comprises choosing as price classification features respective ones of the identified tokens corresponding to static structural elements that immediately precede the ascertained price related field tokens (224); and
the classifying (106, 108, 110) comprises labeling the price related field tokens (224) based on the respective price classification features.
9. The method of claim 8, wherein the price classification features comprise words in the respective ones of the identified tokens corresponding to static structural elements that immediately precede the ascertained price related field tokens (224).
10. The method of claim 8, wherein the labeling of the price related field tokens (224) is based on machine learning classification.
11. The method of claim 7, wherein:
the parsing (178) comprises ascertaining identifier related ones of the field tokens (224) that correspond to identifiers;
the determining (226) comprises choosing as identifier classification features respective ones of the identified tokens corresponding to static structural elements that immediately precede the ascertained identifier related field tokens (224); and
the classifying (106, 108, 110) comprises labeling the identifier related field tokens (224) based on the respective identifier classification features.

12. The method of claim 11, wherein the identifier classification features comprise: words in the respective ones of the identified tokens corresponding to structural elements that immediately precede the ascertained identifier related field tokens (224); and characteristics of the ascertained identifier related field tokens (224).

13. The method of claim 11, wherein the labeling of the identifier related field tokens (224) is based on machine learning classification.
14. The method of claim 1, wherein: the parsing (178) comprises ascertaining item description related ones of the field tokens (224) that correspond to item descriptions;
the determining (226) comprises choosing as item description classification features characteristics of the ascertained item description related field tokens (224); and
the classifying (106, 108, 110) comprises labeling the item description related field tokens (224) based on the respective item description classification features.
15. The method of claim 1, wherein the structural elements defined in the grammar (176) comprise iterating static structural elements each of which appears in all of the electronic messages in the respective cluster and repeats in at least some of the electronic messages in the respective cluster.
16. The method of claim 15, wherein:
the parsing (178) comprises ascertaining iterator related ones of the field tokens
(224) that are associated with iterating ones of the identified tokens that correspond to respective ones of the iterating static structural elements defined in the grammar (176); the determining (226) comprises choosing as iterator classification features respective ones of the identified tokens corresponding to iterating static structural elements adjacent the ascertained iterator related field tokens (224); and
the classifying (106, 108, 110) comprises labeling the iterator related field tokens (224) based on the respective iterator classification features.
17. The method of claim 16, wherein the classifying (106, 108, 110) comprises classifying (106, 108, 110) each of respective ones of the iterator related field tokens
(224) as one of: a price; a quantity; and an item description.
18. The method of claim 1, wherein the one or more data structures permit computer-based generation of commercially relevant data sets in response to respective queries.
19. The method of claim 1, further comprising, by computer apparatus, generating product purchase summary information based on the stored associations between the product purchase relevant labels and the product purchase information of the extracted field tokens (224), and transmitting data for displaying a visualization of the product purchase summary information on a physical network node.
20. Apparatus, comprising:
a memory (328, 324) storing processor-readable instructions; and
a processor coupled to the memory (328, 324), operable to execute the instructions, and based at least in part on the execution of the instructions operable to perform operations comprising
matching (172) a selected electronic message (166) to one of multiple clusters (170) of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar (176) defining an arrangement of structural elements of electronic messages in the cluster;
segmenting the selected electronic message (166) into tokens comprising product purchase information;
parsing (178) the tokens in accordance with the grammar (176) associated with the cluster matched to the selected electronic message (166), wherein the parsing (178) comprises identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar (176) and extracting unidentified ones of the tokens as field tokens (224);
determining (226) classification features of the selected electronic
message (166);
classifying (106, 108, 110) respective ones of the extracted field tokens (224) with respective product purchase relevant labels based on respective ones of the determined features; and in non-transitory computer-readable memory (328, 324), storing
associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens (224) in one or more data structures permitting computer-based generation of actionable purchase
history information.
21. At least one non-transitory computer-readable medium having processor-readable program code embodied therein, the processor-readable program code adapted to be executed by a processor to implement a method comprising:
matching (172) a selected electronic message (166) to one of multiple clusters (170) of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar (176) defining an arrangement of structural elements of electronic messages in the cluster;
segmenting the selected electronic message (166) into tokens comprising product purchase information;
parsing (178) the tokens in accordance with the grammar (176) associated with the cluster matched to the selected electronic message (166), wherein the parsing (178) comprises identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar (176) and extracting unidentified ones of the tokens as field tokens (224);
determining (226) classification features of the selected electronic message (166);
classifying (106, 108, 110) respective ones of the extracted field tokens (224) with respective product purchase relevant labels based on respective ones of the determined features; and
in non-transitory computer-readable memory (328, 324), storing associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens (224) in one or more data structures permitting computer-based generation of actionable purchase history information.

22. Apparatus, comprising:
a product purchase information token parser for
matching (172) a selected electronic message (166) to one of multiple clusters (170) of electronic messages transmitted between physical network nodes to convey product purchase information to designated recipients, each cluster being associated with a respective grammar (176) defining an arrangement of structural elements of electronic messages in the cluster, segmenting the selected electronic message (166) into tokens comprising product purchase information, and
parsing (178) the tokens in accordance with the grammar (176) associated with the cluster matched to the selected electronic message (166), wherein the parsing (178) comprises identifying ones of the tokens that correspond to respective ones of the structural elements defined in the grammar (176) and extracting unidentified ones of the tokens as field tokens (224); and
a product purchase information token classifier for
determining (226) classification features of the selected electronic
message (166);
classifying (106, 108, 110) respective ones of the extracted field tokens (224) with respective product purchase relevant labels based on respective ones of the determined features; and non-transitory computer-readable memory (328, 324) storing associations between the product purchase relevant labels and the product purchase information corresponding to the respective ones of the extracted field tokens (224) in one or more data structures permitting computer-based generation of actionable purchase history information.
23. A method, comprising
by computer apparatus (320):
grouping (92) electronic messages (80) into respective clusters (94, 96) based on similarities between the electronic messages (80), the electronic messages (80) having been transmitted between physical network nodes to convey product purchase information to
designated recipients; for each cluster (94, 96), extracting a respective grammar (100) defining an arrangement of structural elements of the electronic messages (80) in the cluster (94, 96);
based on training data (104) comprising fields of electronic messages
comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, building a classifier (106, 108, 110) that classifies fields of a selected electronic message comprising product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars (100) matched to the selected electronic message; and
in non-transitory computer-readable memory (324, 328), storing the
grammars (100) and the classifier (106, 108, 110) in one or more data structures permitting computer-based parsing of product purchase information from electronic messages (80).
24. The method of claim 23, wherein the grouping (92) comprises:
for each of the electronic messages (80), identifying a message sender of the electronic message;
sorting the electronic messages (80) into groups by message sender, wherein each message sender is associated with a respective one of the groups of electronic messages (80); and
for each group, clustering the electronic messages (80) within the group into one or more clusters (94, 96) based on similarities between the electronic messages (80).
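The first two steps of claim 24 (identify each message sender, then sort messages into per-sender groups) amount to a simple bucketing pass. The sketch below illustrates this with an assumed message structure (`sender`/`body` dictionary keys are invented for illustration, not drawn from the specification):

```python
from collections import defaultdict

def group_by_sender(messages):
    """Sort messages into per-sender groups (claim 24): each sender is
    associated with its own group, which would then be clustered further
    by content similarity."""
    groups = defaultdict(list)
    for msg in messages:
        groups[msg["sender"]].append(msg)
    return dict(groups)

msgs = [
    {"sender": "orders@shopa.example", "body": "Order #1 Total $5"},
    {"sender": "orders@shopa.example", "body": "Order #2 Total $9"},
    {"sender": "ship@shopb.example", "body": "Your parcel has shipped"},
]
groups = group_by_sender(msgs)
```

Grouping by sender first keeps the subsequent similarity clustering cheap, since only messages from the same merchant template family are compared against one another.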
25. The method of claim 24, wherein the clustering comprises clustering electronic messages (80) based on measures of content similarity between pairs of the electronic messages (80).
26. The method of claim 25, wherein the clustering comprises:
for each electronic message, extracting content from lines of the electronic message; and
measuring similarities between electronic messages (80) based on line-based comparisons of the extracted content.
27. The method of claim 25, wherein the measures of content similarity compare similarity and diversity of the contents of the selected electronic message and contents of a respective one of the electronic messages (80) of the associated cluster (94, 96).
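Claim 27's comparison of the "similarity and diversity" of two messages' contents, combined with claim 26's line-based comparison, reads like a Jaccard index over message lines. The following is one plausible interpretation, not language from the claims:

```python
def line_jaccard(msg_a, msg_b):
    """Jaccard index over non-empty message lines: |A ∩ B| / |A ∪ B|.
    Shared lines (static template text) raise the score; lines unique to
    either message (variable purchase data) lower it."""
    a = {line.strip() for line in msg_a.splitlines() if line.strip()}
    b = {line.strip() for line in msg_b.splitlines() if line.strip()}
    if not (a | b):
        return 1.0  # two empty messages are trivially identical
    return len(a & b) / len(a | b)

receipt_1 = "Order Confirmation\nItem: Mug\nTotal: $8.00"
receipt_2 = "Order Confirmation\nItem: Lamp\nTotal: $20.00"
sim = line_jaccard(receipt_1, receipt_2)  # shares 1 of 5 distinct lines
```

Two receipts generated from the same merchant template share most of their boilerplate lines and score high, so a threshold on this measure separates template families within a sender's group.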
28. The method of claim 23, wherein the extracting comprises, for each cluster (94, 96):
building a respective suffix tree representation of contents of the electronic messages (80) in the cluster (94, 96); and
ascertaining the arrangement of structural elements of the electronic messages (80) in the cluster (94, 96) based on the respective suffix tree representation.
29. The method of claim 28, wherein, for each cluster (94, 96):
the suffix tree representation comprises a list of substrings of symbols contained in the contents of the electronic messages (80) in the cluster (94, 96), and indications of the occurrence frequencies of the substrings; and
the ascertaining comprises designating respective ones of the substrings as structural elements of the electronic messages (80) in the cluster (94, 96) based on the respective occurrence frequencies of the substrings.
30. The method of claim 29, wherein, for each cluster (94, 96):
the ascertaining comprises labeling respective ones of the substrings that appear in all the electronic messages (80) of the cluster (94, 96) as static structural elements.
31. The method of claim 29, wherein, for each cluster (94, 96):
the ascertaining comprises labeling respective ones of the substrings that appear in all the electronic messages (80) of the cluster (94, 96) and repeat in at least some of the electronic messages (80) of the cluster (94, 96) as iterating static structural elements.
32. The method of claim 29, wherein, for each cluster (94, 96):
the ascertaining comprises labeling respective ones of the substrings that appear in a majority but less than all of the electronic messages (80) of the cluster (94, 96) as optional structural elements.
33. The method of claim 29, further comprising denominating non-structural-element ones of the substrings as respective non-structural fields of the electronic messages (80) in the cluster (94, 96).
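The labeling rules of claims 30 through 33 reduce to occurrence-frequency tests over the cluster: present in every message makes a substring static, repeated within some messages makes it iterating static, present in a majority but not all makes it optional, and the remainder are non-structural fields. The sketch below substitutes a plain substring scan for the suffix-tree representation of claims 28 and 29; the candidate strings and sample cluster are invented for illustration:

```python
def label_substrings(messages, candidates):
    """Label candidate substrings by their occurrence frequencies across a
    cluster, following the taxonomy of claims 30-33 (static / iterating
    static / optional / non-structural field)."""
    n = len(messages)
    labels = {}
    for s in candidates:
        counts = [m.count(s) for m in messages]
        docs = sum(1 for c in counts if c > 0)  # messages containing s
        if docs == n and any(c > 1 for c in counts):
            labels[s] = "iterating static"      # in all, repeats in some
        elif docs == n:
            labels[s] = "static"                # in all, once each
        elif docs > n / 2:
            labels[s] = "optional"              # in a majority, not all
        else:
            labels[s] = "field"                 # variable, non-structural
    return labels

cluster = [
    "Order Confirmation\nItem: Mug\nItem: Pen\nTotal: $9",
    "Order Confirmation\nItem: Lamp\nTotal: $20\nCoupon applied",
    "Order Confirmation\nItem: Desk\nTotal: $99\nCoupon applied",
]
labels = label_substrings(
    cluster, ["Order Confirmation", "Item: ", "Total: $", "Coupon applied"])
```

A production implementation would enumerate the candidate substrings and their frequencies from the suffix tree itself rather than accept them as input.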
34. The method of claim 23, wherein ones of the features correspond to content extracted from lines of the training electronic messages (80).
35. The method of claim 23, wherein the building comprises training the classifier (106, 108, 110) to classify price related field tokens in the selected electronic message with a respective price label in a predetermined price field labeling taxonomy.
36. The method of claim 23, wherein the building comprises training the classifier (106, 108, 110) to classify identifier related field tokens in the selected electronic message with a respective identifier label in a predetermined identifier field labeling taxonomy.
37. The method of claim 36, wherein the training comprises training the classifier (106, 108, 110) to classify an identifier related field token extracted from the selected electronic message based on features comprising (i) a token extracted from the selected electronic message that corresponds to a static structural element of the respective grammar (100) that immediately precedes the identifier related field token in the selected electronic message, and (ii) characteristics of the identifier related field token.
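Claim 37 names two feature groups for identifier classification: the static structural element immediately preceding the token, and characteristics of the token itself. One way such features might be extracted is sketched below; the feature names and the example token are hypothetical, not from the specification:

```python
import re

def identifier_features(preceding_static, token):
    """Build a feature dictionary for an identifier-like field token:
    the preceding static element (e.g. a field label from the grammar)
    plus surface characteristics of the token."""
    return {
        "preceding": preceding_static.strip().lower(),
        "length": len(token),
        "has_digits": bool(re.search(r"\d", token)),
        "has_letters": bool(re.search(r"[A-Za-z]", token)),
        "all_caps_alnum": bool(re.fullmatch(r"[A-Z0-9-]+", token)),
    }

feats = identifier_features("Order #: ", "1Z999AA10123456784")
# A trained classifier could map the preceding label together with these
# token characteristics to a taxonomy label such as an order number or a
# carrier tracking number (label names here are illustrative).
```

The preceding static element carries most of the signal (a token after "Order #:" is rarely anything but an order identifier), while the token characteristics disambiguate when the label text is generic.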
38. The method of claim 23, wherein the building comprises training the classifier (106, 108, 110) to classify item description related field tokens in the selected electronic message as item descriptions based on comparisons between features of the item description related field tokens and features of known item descriptions.
39. The method of claim 23, further comprising
by the computer apparatus (320), extracting tokens from an electronic message according to a respective one of the grammars (100) and applying the classifier (106, 108, 110) to label respective ones of the extracted tokens.
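The application step of claim 39 — extracting tokens from a message according to a matched grammar, then labeling them — can be approximated by treating the grammar's static elements as an ordered template: text matching the static elements is consumed as structure, and whatever falls between them is emitted as field tokens for the classifier. A minimal regex-based sketch (the grammar and message are invented):

```python
import re

def parse_with_grammar(message, static_elements):
    """Match the grammar's static elements in order and return the spans
    between them as candidate field tokens, or None if the message does
    not fit the grammar."""
    pattern = "(.*?)".join(re.escape(s) for s in static_elements)
    m = re.search(pattern, message, flags=re.DOTALL)
    return [g.strip() for g in m.groups()] if m else None

grammar = ["Item:", "Price:", "Order #"]  # static elements, in order
tokens = parse_with_grammar("Item: Blue Mug Price: $8.00 Order #", grammar)
# tokens holds the field tokens between the static elements (here the
# item description and the price), ready to be labeled by the classifier.
```

A real grammar would also encode the iterating and optional elements of claims 31 and 32, which a flat regex template like this one does not capture.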
40. Apparatus, comprising:
a memory (324, 328) storing processor-readable instructions; and
a processor coupled to the memory (324, 328), operable to execute the
instructions, and based at least in part on the execution of the instructions operable to perform operations comprising
grouping (92) electronic messages (80) into respective clusters (94, 96) based on similarities between the electronic messages (80), the electronic messages (80) having been transmitted between physical network nodes to convey product purchase information to
designated recipients;
for each cluster (94, 96), extracting a respective grammar (100) defining an arrangement of structural elements of the electronic messages (80) in the cluster (94, 96);
based on training data (104) comprising fields of electronic messages (80) comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, building a classifier (106, 108, 110) that classifies fields of a selected electronic message comprising product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars (100) matched to the selected electronic message; and
in non-transitory computer-readable memory (324, 328), storing the grammars (100) and the classifier (106, 108, 110) in one or more data structures permitting computer-based parsing of product purchase information from electronic messages (80).
41. At least one non-transitory computer-readable medium having processor-readable program code embodied therein, the processor-readable program code adapted to be executed by a processor to implement a method comprising:
grouping (92) electronic messages (80) into respective clusters (94, 96) based on similarities between the electronic messages (80), the electronic messages (80) having been transmitted between physical network nodes to convey product purchase information to designated recipients;
for each cluster (94, 96), extracting a respective grammar (100) defining an arrangement of structural elements of the electronic messages (80) in the cluster (94, 96);
based on training data (104) comprising fields of electronic messages (80) comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, building a classifier (106, 108, 110) that classifies fields of a selected electronic message comprising product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars (100) matched to the selected electronic message; and
in non-transitory computer-readable memory (324, 328), storing the grammars (100) and the classifier (106, 108, 110) in one or more data structures permitting computer-based parsing of product purchase information from electronic messages (80).
42. Apparatus, comprising:
a product purchase information grammar extractor for
grouping (92) electronic messages (80) into respective clusters (94, 96) based on similarities between the electronic messages (80), the electronic messages (80) having been transmitted between physical network nodes to convey product purchase information to
designated recipients;
for each cluster (94, 96), extracting a respective grammar (100) defining an arrangement of structural elements of the electronic messages (80) in the cluster (94, 96);
a product purchase information token classifier (106, 108, 110) trainer for
based on training data (104) comprising fields of electronic messages (80) comprising product purchase information that are labeled with product purchase relevant labels in a predetermined field labeling taxonomy, building a classifier (106, 108, 110) that classifies fields of a selected electronic message comprising product purchase information with respective ones of the product purchase relevant labels based on respective associations between tokens extracted from the selected electronic message and the structural elements of a respective one of the grammars (100) matched to the selected electronic message; and
non-transitory computer-readable memory (324, 328) storing the grammars (100) and the classifier (106, 108, 110) in one or more data structures permitting computer-based parsing of product purchase information from electronic messages (80).
PCT/US2015/056013 2014-10-21 2015-10-16 Extracting product purchase information from electronic messages WO2016064679A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14/519,919 2014-10-21
US14/519,975 US9875486B2 (en) 2014-10-21 2014-10-21 Extracting product purchase information from electronic messages
US14/519,919 US9563904B2 (en) 2014-10-21 2014-10-21 Extracting product purchase information from electronic messages
US14/519,975 2014-10-21

Publications (1)

Publication Number Publication Date
WO2016064679A1 true WO2016064679A1 (en) 2016-04-28

Family

ID=55761353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/056013 WO2016064679A1 (en) 2014-10-21 2015-10-16 Extracting product purchase information from electronic messages

Country Status (1)

Country Link
WO (1) WO2016064679A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120203632A1 (en) * 2011-02-07 2012-08-09 Marc Blum Tracking and summarizing purchase information
US20120330971A1 (en) * 2011-06-26 2012-12-27 Itemize Llc Itemized receipt extraction using machine learning
US20130024525A1 (en) * 2011-07-19 2013-01-24 Project Slice Inc. Augmented Aggregation of Emailed Product Order and Shipping Information
US20130024282A1 (en) * 2011-07-23 2013-01-24 Microsoft Corporation Automatic purchase history tracking
US20140105508A1 (en) * 2012-10-12 2014-04-17 Aditya Arora Systems and Methods for Intelligent Purchase Crawling and Retail Exploration

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10055718B2 (en) 2012-01-12 2018-08-21 Slice Technologies, Inc. Purchase confirmation data extraction with missing data replacement
US11032223B2 (en) 2017-05-17 2021-06-08 Rakuten Marketing Llc Filtering electronic messages
US11562610B2 (en) 2017-08-01 2023-01-24 The Chamberlain Group Llc System and method for facilitating access to a secured area
US11574512B2 (en) 2017-08-01 2023-02-07 The Chamberlain Group Llc System for facilitating access to a secured area
US11941929B2 (en) 2017-08-01 2024-03-26 The Chamberlain Group Llc System for facilitating access to a secured area
US11803883B2 (en) 2018-01-29 2023-10-31 Nielsen Consumer Llc Quality assurance for labeled training data

Similar Documents

Publication Publication Date Title
US9892384B2 (en) Extracting product purchase information from electronic messages
US9875486B2 (en) Extracting product purchase information from electronic messages
US20210103964A1 (en) Account manager virtual assistant using machine learning techniques
US20210256574A1 (en) Method and system for programmatic analysis of consumer reviews
US11032223B2 (en) Filtering electronic messages
US20110040631A1 (en) Personalized commerce system
WO2016064679A1 (en) Extracting product purchase information from electronic messages
US20220382794A1 (en) System and method for programmatic generation of attribute descriptors
US9384497B2 (en) Use of SKU level e-receipt data for future marketing
CN103530794A (en) Content-based bidding in online advertising
CN103530793A (en) Content-based targeted online advertisement
US20240062264A1 (en) Ai- backed e-commerce for all the top rated products on a single platform
US20230100685A1 (en) Financial product information collecting platform system, financial product information collecting method, and computer program for the same
CN112784021A (en) System and method for using keywords extracted from reviews
JP2020052771A (en) Sorting data generation system
US20220092634A1 (en) Information processing device and non-transitory computer readable medium
Chen et al. Application of EWOM to Service Quality Management of Electronic Commerce
CN113886450A (en) User matching method, related device, equipment and storage medium
Lindgren Recognizing names out of a string field: sanitation of invoicing data
WO2022164626A1 (en) Audience recommendation using node similarity in combined contextual graph embeddings

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15851963

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15851963

Country of ref document: EP

Kind code of ref document: A1