WO2019118927A2 - Conversational intelligence architecture system - Google Patents
- Publication number
- WO2019118927A2 (PCT/US2018/065846)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/90—Details of database functions independent of the retrieved data types
- G06F16/903—Querying
- G06F16/9032—Query formulation
- G06F16/90332—Natural language query formulation or dialogue systems
Definitions
- the present application relates to computer and software systems, architectures, and methods for interfacing to a very large database which includes detailed transaction data, and more particularly to interfaces which support category managers and similar roles in retail operations.
- Retailers are struggling to achieve growth, especially in center store categories due to a decline in baskets and trips.
- Customers are becoming increasingly omnichannel, driven by convenience: they combine in-store shopping, online delivery, and multi-store trips.
- Pressure is rising on retailers to blend the convenience of online with an enticing, convenient store: retailers need to drive full and loyal trips to ensure growth.
- Fill-in trips are rising with the decline of destination trips, such that twenty fill-in trips might equate to only one destination trip.
- Retailers are realizing that they can earn revenue by sharing their customers' purchase data, and in many cases companies will even pay a premium up front for this data.
- a common problem faced by category managers and other high-level business users is that they: a) aren't necessarily data analysts, and even if they are, b) they don't have the time to do the necessary analysis themselves, so c) they have to send data to data analysts for answers, who d) have a typical turnaround time of ~2-10 days, when e) those answers are needed NOW!
- the present application teaches, among other innovations, new architecture (and systems and related methods) built around a conversational intelligence architecture system used for retail and CPG (Consumer Packaged Goods) management, especially (but not only) category management.
- CPG: Consumer Packaged Goods
- There are a number of components in this architecture and correspondingly there are a number of innovative teachings disclosed in the present application. While these innovative teachings all combine synergistically, it should be noted that different innovations, and different combinations and subcombinations of these innovations, are all believed to be useful and nonobvious. No disclosed inventions, nor combinations thereof, are disclaimed nor relinquished in any way.
- Modern category management requires support for category managers to dig deeply into a large database of individual transactions. (Each transaction can e.g. correspond to one unique item on a cash register receipt.)
- the various disclosed inventions support detailed response to individual queries, and additionally provide rich enough data views to allow managers to "play around" with live data. This is a challenging interface requirement, particularly in combination with the need for near-real-time response.
- a "user" will typically (but not exclusively) be a category manager in a large retail or CPG operation, i.e. one with tens, hundreds, or thousands of physical locations, and thousands or more of distinct products. Other users (such as store or location managers or higher-level executives) often benefit from this interface, but the category manager is a primary driving instance.
- user queries are matched to one of a predetermined set of tailored analytical engines.
- the output of the natural language parser is used to select one of those preloaded query engines as a "lead" analysis engine.
- the lead analysis engine provides an initial output (preferably graphic) which represents a first-order answer to the parsed query, but this is not (nearly) the end of the process.
- the lead analysis engine's initial output is displayed to the user, and provides context supporting further interaction.
- the preloaded analytical engines are supported by a set of further-analysis modules.
- these further-analysis modules intelligently retrieve data from ones of multiple pre-materialized "data cubes," and accordingly provide further expansion of the response.
- the further-analysis modules are run in parallel, and offered to the user in a rank order. This results in a degree of interactivity in query response.
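The flow described above (parsed query selects a "lead" engine; further-analysis modules run in parallel and are offered in rank order) can be sketched roughly as follows. This is an illustrative assumption about one possible implementation, not the patent's actual code; every name here (LEAD_ENGINES, by_region, by_segment, and the relevance numbers) is invented for illustration.

```python
# Hypothetical sketch: intent-to-engine dispatch plus parallel further analysis.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical registry of preloaded analytical engines, keyed by parsed intent.
LEAD_ENGINES = {
    "showSales": lambda scope: {"engine": "sales", "total": 1.0},
    "showOutliers": lambda scope: {"engine": "outlier", "total": 2.0},
}

# Hypothetical further-analysis modules; each returns (name, relevance, result).
def by_region(block):
    return ("by_region", 0.9, "regional breakdown")

def by_segment(block):
    return ("by_segment", 0.4, "customer-segment breakdown")

def answer(intent, scope):
    lead = LEAD_ENGINES[intent]          # select the "lead" analysis engine
    fundamental = lead(scope)            # first-order answer, displayed first
    with ThreadPoolExecutor() as pool:   # run further-analysis modules in parallel
        results = list(pool.map(lambda m: m(fundamental), [by_region, by_segment]))
    # Offer the further analyses to the user in rank (relevance) order.
    ranked = sorted(results, key=lambda r: r[1], reverse=True)
    return fundamental, ranked

fundamental, ranked = answer("showSales", {"state": "CA"})
```

The parallel execution is what allows the ranked "highlights" list to appear alongside the initial output with near-real-time responsiveness.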
- natural language queries are parsed, e.g. by a conventional parser, and immersive visualizations are used, as output, to immediately illuminate the response to a query.
- "Planogram views" show images of the products on virtual shelves. These are preferably interactive, so that a user can "lift" a (virtual) product off the (virtual) shelf, and thereby see corresponding specific analyses. Note that there is a strong synergy between the natural-language input and the immersive-visualization output. The combination of these two user interfaces provides a fully usable system for users who are not inclined to quantitative thinking. At the same time, users who want to dig into quantitative relationships can easily do so. This versatility would not be available except by use of both of these user interface aspects.
- a further feature is the tie-in to exogenous customer data.
- the “Customer360” module provides a deep insight into customer behaviors and differentiation. When this module is present, queries for which the knowledge of individual customers’ activities would be helpful can be routed to this module.
- Figure 1 shows an overview of operations in the preferred implementation of the disclosed inventions.
- Figure 2 combines an entity relationship diagram with indications of the information being transferred. This diagram schematically shows the major steps in getting from a natural language query to an immersive-environment image result.
- Figure 3 shows a different view of implementation of the information flows and processing of Figure 2.
- Figure 4A shows how the different analytical module outputs are ranked to provide "highlights" for the user to explore further. From such a list, users can select "interesting" threads very quickly indeed.
- Figures 4B and 4C show two different visualization outputs.
- Figure 5A is a larger version of Figure 4C, in which all the shelves of a store are imaged. Some data can be accessed from this level, or the user can click on particular aisles to zoom in.
- Figure 5B shows how selection of a specific product (defined by its UPC) opens a menu of available reports.
- Figure 6 shows how a geographic display can be used to show data over a larger area; in this example, three geographic divisions of a large retail chain are pulled up for comparison.
- Figure 7 shows how broad a range of questions can be parsed. This can include anything from quite specific database queries to fairly vague inquiries.
- Figure 8 shows some examples of the many types of possible queries.
- Figure 9 shows a specific example of a query, and the following figures ( Figures 10A, 10B, and 11) show more detail of its handling. This illustrates the system of Figure 2 in action.
- Figure 12 shows one sample embodiment of a high-level view of an analytical cycle according to the present inventions.
- Figure 13 shows less-preferred (upper left) and more-preferred (lower right) ways of conveying important information to the user.
- Figure 14 shows an exemplary interpretive metadata structure.
- Figure 15 generally shows the data sets which are handled, with daily or weekly updating.
- the right side of this Figure shows how pre-aggregated data is generated offline, so that user queries can access these very large data sets with near-realtime responsiveness.
- Figure 16 shows some sample benefits and details of the Customer 360 modules.
- Figures 17A-17B show two different exemplary planogram views.
- the present application teaches new architecture and systems and related methods for a conversational intelligence architecture system used in a retail setting. There are a number of components in this architecture, and correspondingly there are a number of innovative teachings disclosed in the present application. While these innovative teachings all combine synergistically, it should be noted that different innovations, and different subcombinations of these innovations, are all believed to be useful and nonobvious. No disclosed inventions, nor combinations thereof, are disclaimed nor relinquished in any way.
- user queries are matched to one of a predetermined set of tailored analytical engines.
- 13 different analytical engines are present, but of course this number can be varied.
- the output of the natural language parser is used to select one of those preloaded query engines as a "lead" analysis module.
- the lead analysis engine provides a graphic output representing an answer to the parsed query, but this is not (nearly) the end of the process.
- the lead analysis engine's initial output is displayed to the user, and provides context supporting further interaction.
- the preloaded analytical engines are supported by a set of further-analysis modules.
- these further-analysis modules intelligently retrieve data (from the data cubes) and run analyses corresponding to further queries within the context of the output of the lead analysis engine.
- natural language queries are parsed, e.g. by a conventional parser, and immersive visualizations are used, as output, to immediately illuminate the response to a query.
- “planogram views” as seen in e.g. Figures 17A-17B
- each further-analysis module is scored on its effect upon the initial query. These results are displayed in relevance order in the "intelligence insight".
- a further feature is the tie-in to exogenous data, such as customer data.
- The "Customer360" module provides a deep insight into customer behaviors and differentiation. When this module is present, queries for which the knowledge of individual customers' activities would be helpful can be routed to this module.
- This optional expansion uses an additional dataset, together with its own analytical modules and knowledge base, to provide a wider view of the universe.
- queries can be routed to the customer database or the sales database as appropriate.
- the “Customer360” module permits the introduction of e.g. other external factors not derivable from the transactional data itself.
- a full query cycle from user input to final output, can be e.g. as follows:
- the user inputs a query into the front-end interface in natural language
- natural language processing (NLP) determines the intent and scope of the request, and passes them to the analysis module
- the lead analysis engine determines what data cube(s) are relevant to the query
- the fundamental data block(s) are translated into appropriate visualization(s) and displayed to the user at this point.
- the user can then select one or more desired further-analysis engines.
- the intent and scope include the selection of one or more further-analysis modules, and the fundamental data block(s) are returned later. In still other embodiments, this can be different.
- the fundamental data block(s) are passed to the further-analysis engine(s);
- the fundamental data block(s) are analyzed according to one or more sub-module metrics
- the further- analysis engine determines which result block(s) are most important to the query at hand (e.g., when using an outlier-based lead analysis engine, this can be the most significant outlier(s));
- One or more intelligence block(s) are populated based on the most important result block(s), and
- the lead analysis engine then returns the fundamental and intelligence blocks
- the fundamental and intelligence blocks are translated into natural language results, visualizations, and/or other means of usefully conveying information to the user, as appropriate;
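The full query cycle just enumerated can be sketched end to end as below. This is a hedged illustration under invented assumptions: the parse stub, the cube contents, and the candidate insights are all hypothetical stand-ins for the NLP step, the pre-materialized data cubes, and the further-analysis engines described above.

```python
# Hypothetical end-to-end sketch of the query cycle: parse -> lead engine ->
# fundamental block -> further analysis -> intelligence block(s).

def parse(query):
    # Stand-in for the NLP step: returns (intent, scope). Real parsing is
    # far richer; this is hard-coded for illustration.
    return "showSales", {"department": "SNACKS"}

# Hypothetical pre-materialized cube: department -> total sales.
CUBES = {"sales_by_dept": {"SNACKS": 120.0, "SEAFOOD": 80.0}}

def lead_engine(intent, scope):
    # Determine which cube is relevant and pull the fundamental data block.
    return {"metric": "sales", "value": CUBES["sales_by_dept"][scope["department"]]}

def further_engine(block):
    # Score candidate result blocks and keep the most important one(s),
    # which become the "intelligence block(s)".
    candidates = [("price_change", 0.8), ("distribution_loss", 0.3)]
    best = max(candidates, key=lambda c: c[1])
    return [{"insight": best[0], "score": best[1]}]

def run(query):
    intent, scope = parse(query)
    fundamental = lead_engine(intent, scope)
    intelligence = further_engine(fundamental)
    # Downstream, both blocks would be translated into natural language
    # results and/or visualizations for display.
    return fundamental, intelligence

fundamental, intelligence = run("How are snack sales doing?")
```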
- Data cubes are essentially one step up from raw transactional data, and are aggregated over one source.
- a primary source of raw data is transactional data, typically with one line of data per unique product per basket transaction. Ten items in a basket means ten lines of data, so that two years of data for an exemplary store or group might comprise two billion lines of data or more.
- Data cubes are preferably calculated offline, typically when the source data is updated (which is preferably e.g. weekly).
- Data cubes store multiple aggregations of data above the primary transactional data, and represent different ways to answer different questions, providing e.g. easy ways to retrieve division totals by week.
- queries are preferably framed in terms of product, geography, time of sale, and customer.
- These pre-prepared groupings allow near-instantaneous answers to various data queries. Instead of having to search through e.g. billions of rows of raw data, these pre-materialized data cubes anticipate, organize, and prepare data according to the questions addressed by e.g. the relevant lead analysis engine(s).
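The pre-aggregation idea can be shown with a toy sketch: raw transaction lines (one line per product per basket) are rolled up offline into a small cube keyed by, say, department and week, so that later queries become single lookups rather than scans of billions of rows. The field names and figures here are invented for illustration.

```python
# Illustrative offline cube build: aggregate raw transaction lines once,
# so queries never touch the raw data.
from collections import defaultdict

# Hypothetical raw transaction lines (one per product per basket).
transactions = [
    {"dept": "SNACKS", "week": 201601, "sales": 2.50},
    {"dept": "SNACKS", "week": 201601, "sales": 3.00},
    {"dept": "SNACKS", "week": 201602, "sales": 4.25},
    {"dept": "SEAFOOD", "week": 201601, "sales": 9.99},
]

def build_cube(lines):
    # One offline pass over the raw data, e.g. on the weekly source update.
    cube = defaultdict(float)
    for t in lines:
        cube[(t["dept"], t["week"])] += t["sales"]
    return dict(cube)

cube = build_cube(transactions)
# A query like "SNACKS sales in week 201601" is now a single lookup:
snacks_w1 = cube[("SNACKS", 201601)]
```

The same pattern extends to the product/time/geography/customer dimensions listed below; a production system would materialize many such cubes at different aggregation levels.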
- FACTS is a mnemonic acronym which refers to: the Frequency of customer visits, the Advocated Categories the customer buys, and the Total Spend of the customer; these elements give some overall understanding of customers' engagement and satisfaction with a retailer. This data is used to compile one presently-preferred data cube.
- TruPrice™ looks at customer analysis, e.g. by identifying whether certain customers are more price-driven, more quality-driven, or price/quality neutral.
- Product includes sales sorted by e.g. total, major department, department category, subcategory, manufacturer, and brand.
- Time includes sales sorted by e.g. week, period, quarter, and year, as well as year-to-date and rolling periods of e.g. 4, 12, 13, 26, and 52 weeks.
- Geography/Geographical Consideration includes sales sorted by e.g. total, region, format, and store.
- Customer Segments includes sales as categorized according to data from e.g. FACTS, market segment, TruPrice, and/or retailer-specific customer segmentations.
- Assortment: Understand which items are most important to Primary shoppers; measure customer switching among brands; measure and benchmark new products.
- Affinities: Understand categories and brands that are purchased together to optimize merchandising.
- One sample embodiment can operate as follows.
- A) Preferably natural language queries are parsed, e.g. by a conventional parser.
- the queries are matched to one of a predetermined set of tailored analytical engines.
- 13 different analytical engines are present, but of course this number can be varied.
- the output of the natural language parser is used to select one of those preloaded query engines as a “lead” analysis engine.
- These analytical engines can utilize multiple data cubes.
- the lead analysis engine provides a graphic output representing an answer to the parsed query, but this is not (nearly) the end of the process.
- the lead analysis engine's initial output is displayed to the user, and provides context supporting further interaction.
- the preloaded query engines are supported by a set of further-analysis modules.
- In the presently preferred embodiment, seven different further-analysis modules are present, but of course this number can be varied.
- each further-analysis module provides a relevance score in proportion to its effect upon the initial query. These are displayed in relevance order in the "intelligence insight".
- the methodology for calculating the relevance score in each further-analysis module is standardized so these multiple scores can justifiably be ranked together.
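One way to make scores from different modules comparable, as the preceding point requires, is to standardize each module's raw score against that module's own score distribution. The z-score approach and the distribution parameters below are illustrative assumptions, not the patent's stated formula; the module names and numbers are invented.

```python
# Hypothetical standardization so relevance scores from different
# further-analysis modules can justifiably be ranked together.

# Assumed per-module distribution parameters: (mean, std) of raw scores.
PARAMS = {"switching": (10.0, 4.0), "basket": (0.5, 0.2)}

def standardize(module, raw_score):
    # Convert a module-specific raw score to a comparable z-score.
    mean, std = PARAMS[module]
    return (raw_score - mean) / std

# Raw scores on wildly different scales...
results = [("switching", 18.0), ("basket", 0.94)]
# ...become directly rankable after standardization.
ranked = sorted(results, key=lambda r: standardize(*r), reverse=True)
```

Here "basket" outranks "switching" even though its raw score is far smaller, because it is further above its own module's typical range.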
- Standardization tables and associated distribution parameters provide metadata tables to tell the front-end interface what is different about a given retailer as compared to other retailers, which might have different conditions. This permits use of metadata to control what data users see, rather than having different code branches for different retailers. These standardization tables address questions like, e.g.,“does this particular aspect make sense within this retailer or not, and should it be hidden?” These configuration parameters switch on or off what may or may not make sense for that retailer.
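The metadata-over-code-branches idea above can be sketched as a simple per-retailer configuration table that switches features on or off. The retailer names and feature keys here are invented for illustration; a real system would presumably store this in the standardization tables rather than in code.

```python
# Illustrative metadata-driven configuration: one code path, per-retailer
# switches controlling what the front-end shows.

RETAILER_CONFIG = {
    "retailer_a": {"customer360": True,  "truprice_segments": True},
    "retailer_b": {"customer360": False, "truprice_segments": True},
}

def visible_features(retailer):
    # Hide any aspect that does not make sense for this retailer,
    # without a separate code branch per retailer.
    config = RETAILER_CONFIG.get(retailer, {})
    return [feature for feature, enabled in config.items() if enabled]
```

For example, a retailer without a customer-data feed simply has `customer360` switched off, and the front end never offers those views.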
- E) Another innovative and synergistic aspect is the connection to immersive visualizations which immediately illuminate the response to a query.
- a business user can interact by using natural-language queries, and then receive“answers” in a way that is retailer-centric.
- the “planogram view” shows products on virtual shelves, and also provides the ability to (virtually) lift products off the shelf and see analyses just for that product. This provides the business user with the pertinent answers with no technical barriers.
- Customer360 can provide additional datasets and/or analytical modules to provide a wider view of the universe. Queries can be routed e.g. to the customer database (or other exogenous database) or the sales database, as appropriate.
- Customer 360 helps retailers and CPG manufacturers gain a holistic real-time view of their customers, cross-channel, to fuel an AI- enabled application of deep, relevant customer insights.
- Customer 360 is a first-of-its-kind interactive customer data intelligence system that uses hundreds of shopper attributes for insights beyond just purchasing behavior.
- Customer 360 can take into consideration other data that influences how customers buy, such as e.g. other competition in the region, advertising in the region, patterns of customer affluence in the region, how customers are shopping online vs. in brick and mortar stores, and the like.
- Customer 360 can help the user identify that, e.g., a cut-throat big box store down the block is "stealing" customers from a store, or that, e.g., a new luxury health food store nearby is drawing customers who go to the Whole Foods for luxuries and then come to the user's store for basics.
- CMS presents fundamental sales development information related to the defined scope through, e.g., a background view (of a relevant map, store, or other appropriate context), and e.g. sections of an analytical information panel (including e.g. summary, customer segments, and sales trend).
- CMS presents additional / "intelligent" information in another section of the analytical information panel.
- This could be, e.g., one or more information blocks that the system automatically identifies as relevant to explain issues that are impacting the sales development in the current scope, or e.g. an information block that answers an explicit question from the user (e.g. “What are people buying instead of cookies?”).
- a question (as derived from user interaction with the system) preferably defines at least a scope, e.g. {state: 'CA', department: 'SNACKS', dateRange: '201601-201652'}, and an intent, e.g. showSales
- the front-end user interface contacts the backend information providers according to the scope & intent, and then the backend provides a fundamental info block and several“intelligent information blocks” that can be displayed in the analytical information panel.
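The scope-plus-intent contract between front end and backend can be sketched as below. The block types and relevance values are invented; only the overall shape (one fundamental block plus several ranked intelligent blocks, as the text describes) is taken from the document.

```python
# Hypothetical sketch of the front-end/backend exchange: a question carries
# a scope and an intent; the backend answers with one fundamental info block
# and several "intelligent information blocks" for the analytical panel.

question = {
    "scope": {"state": "CA", "department": "SNACKS", "dateRange": "201601-201652"},
    "intent": "showSales",
}

def backend(question):
    # Stand-in provider: a real backend would consult the data cubes and
    # further-analysis modules according to the scope and intent.
    fundamental = {"type": "salesTrend", "scope": question["scope"]}
    intelligent = [
        {"type": "switching", "relevance": 0.8},
        {"type": "affinity", "relevance": 0.5},
    ]
    return {"fundamental": fundamental, "intelligent": intelligent}

panel = backend(question)
```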
- Lead analysis engines relate to what the user is looking for, e.g.: outlier data? Whether and why customers are switching to or from competitors? Performance as measured against competitors? Customer loyalty data? Success, failure, and/or other effects of promotional campaigns?
- This can be provided via, e.g., rules hard-coded in the user interface, and/or an interpretation "process" that adds metadata to the information block, as in e.g. Figure 14.
- the further-analysis module(s) preferably look at, e.g., which attributes have a relevant impact on sales.
- intelligence domains can include e.g. the following, for e.g. an outlier-based lead analysis engine:
- Category management reports can include, e.g., an Event Impact Analyzer, which measures the impact of an event against key sales metrics; a Switching Analyzer, which can diagnose what is driving changes in item sales; a Basket Analyzer, which identifies cross-category merchandising opportunities; a Retail Scorecard, which provides high-level business and category reviews by retailer hierarchy and calendar; and numerous other metrics.
- Figure 2 combines an entity relationship diagram with indications of the information being transferred. This diagram schematically shows the major steps in getting from a natural language query to an immersive-environment image result.
- step 1 uses natural language parsing (NLP) to obtain the intent and scope of the question.
- Step 2 is a triage step which looks at the intent and scope (from Step 1) to ascertain which analytical module(s) should be invoked, and what should be the initial inputs to the module(s) being invoked.
- In Step 3, one or more modules of the Analytical Library are invoked accordingly; currently there are 13 modules available in the Analytical Library, but of course this can change.
- In Step 4, the invoked module(s) are directed to one or more of the Data Sources.
- Preferably intelligent routing is used to direct access to the highest level of data that is viable.
- The resulting data loading is labeled as Step 5.
- a visualizer process generates a visually intuitive display, e.g. an immersive environment (Step 7).
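Step 4's "intelligent routing... to the highest level of data that is viable" can be sketched as a fallback over sources ordered from most to least aggregated. The sources, dimension sets, and covering rule below are invented assumptions for illustration.

```python
# Hypothetical intelligent routing: answer from the most aggregated source
# that can still satisfy the query, falling back toward raw transactions
# only when a finer dimension (e.g. UPC) is required.

# Sources ordered from most to least aggregated; each lists the query
# dimensions it can answer on.
SOURCES = [
    ("division_weekly_cube", {"division", "week"}),
    ("store_daily_cube", {"store", "day", "department"}),
    ("raw_transactions", {"store", "day", "department", "upc", "basket"}),
]

def route(required_dims):
    # Try the highest (cheapest) level first.
    for name, dims in SOURCES:
        if required_dims <= dims:
            return name
    raise ValueError("no source covers the query")

route({"division", "week"})   # served from the pre-aggregated cube
route({"store", "upc"})       # only the raw transactions cover UPC
```

Serving coarse queries from the cubes is what keeps response near-real-time despite the billions of underlying transaction lines.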
- Figure 3 shows a different view of implementation of the information flows and processing of Figure 2.
- a natural language query appears, and natural language processing is used to derive intent and scope values. These are fed into the Analytical API, which accordingly makes a selection into the Analytical Library as described above.
- the invoked analytical modules accordingly use the Data API to get the required data, and then generate an answer. The answer is then translated into a visualization as described.
- Figure 4A shows how the different analytical module outputs are ranked to provide "highlights" for the user to explore further. From such a list, users can select "interesting" threads very quickly indeed.
- Figures 4B and 4C show two different visualization outputs.
- Figure 4B shows a geographic display, where data is overlaid onto a map.
- Figure 4C shows a Category display, where a category manager can see an image of the retail products in a category, arranged on a virtual retail display.
- This somewhat-realistic display provides a useful context for a category manager to change display order, space, and/or priorities, and also provides a way to select particular products for follow-up queries.
- Figure 5A is a larger version of Figure 4C, in which all the shelves of a store are imaged. Some data can be accessed from this level, or the user can click on particular aisles to zoom in.
- Figure 5B shows how selection of a specific product (defined by its UPC) opens a menu of available reports.
- Figure 6 shows how a geographic display can be used to show data over a larger area; in this example, three geographic divisions of a large retail chain are pulled up for comparison.
- Figure 7 shows how broad a range of questions can be parsed.
- Figure 8 shows some examples of the many types of possible queries.
- Figure 9 shows a specific example of a query, and the following figures ( Figures 10A, 10B, and 11) show more detail of its handling. This illustrates the system of Figure 2 in action.
- Figure 12 shows one sample embodiment of a high-level view of an analytical cycle according to the present inventions.
- Figure 13 shows less-preferred (upper left) and more-preferred (lower right) ways of conveying important information to the user.
- Appendices A-G show exemplary analytical specifications for the seven presently-preferred further-analysis modules, and Appendix H shows exemplary source data for Appendices A-G. These appendices are all hereby incorporated by reference in their entirety.
- CMS / C-Suite: Category Management Suite
- KVI can refer both to Key Value Indicators, and also to Key Value Items, e.g. products within a category the user may wish to track, such as those that strongly drive sales in their category; indicative of important products for price investment.
- CPG: Consumer Packaged Goods represent the field one step earlier and/or broader in the supply chain than retail.
- Well-known CPG companies include, e.g., Procter & Gamble™, Johnson & Johnson™, Clorox™, General Mills™, etc., where retail is direct client/customer/consumer sales, such as e.g. Target™, Albertsons™, etc.
- CPG is not strictly exclusive with retail; CPG brands like e.g. New Balance, clothing brands like Prada and Gucci, some spice companies, and others are both CPG and retail, in that they sometimes have their own stores in addition to selling to retailers.
- a method for processing queries into a large database of transactions comprising the actions of: receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
- a method for processing queries into a large database of transactions comprising the actions of: receiving and parsing a natural language query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to a large set of transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; allowing the user to select at least one of the further-analysis modules, and displaying the results from the selected further-analysis module to the user with an immersive environment, in which items relevant to the query are made conspicuous.
- a method for processing queries into a large database of transactions comprising the actions of: receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; wherein at least one said analysis module operates not only on transactional data, but also on customer data which is not derived from transactional data; and allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
- a method for processing queries into a large database of transactions comprising the actions of: receiving and parsing a natural language query from a user, and accessing a database of transactions to thereby produce an answer to the query; and displaying an immersive environment to the user, in which objects relevant to the query are made conspicuous.
- a method for processing queries into a large database of transactions comprising the actions of: when a user inputs a natural-language query into the front-end interface, natural language processing determines the intent and scope of the request, and passes the intent and scope to the analysis module; in the analysis module, the intent and scope are used to select a primary analysis engine; from the intent and scope, the primary analysis engine determines what data cube(s) are relevant to the query at hand, and retrieves the appropriate fundamental data block(s); the fundamental data block(s) are passed to the specific/secondary analysis engine(s); in the specific/secondary analysis engine(s): the fundamental data block(s) are analyzed according to one or more sub-module metrics; relevance scores are calculated for the subsequent result block(s); and, based on the relevance scores, the specific/secondary analysis engine determines which result block(s) are most important to the query at hand; one or more intelligence block(s) are populated based on the most important result block(s), and the intelligence block(s) are passed back to the primary analysis engine; the primary analysis engine then returns the fundamental and intelligence blocks; the fundamental and intelligence blocks are then passed back out of the analysis module, whereupon the fundamental and intelligence blocks are translated into natural language results, visualizations, and/or other means of usefully conveying information to the user, as appropriate; and the translated results are displayed to the user.
- Systems and methods for processing queries against a large database of transactions. An initial query is processed by a lead analysis engine, but processing does not stop there; the output of the lead analysis engine is used to provide general context, and is also used to select a further-processing module. Multiple results, from multiple further-processing modules, are displayed in a ranked list (or equivalent). The availability of multiple directions of further analysis helps the user to develop an intuition for what trends and drivers might be behind the numbers. Most preferably the resulting information is used to select one or more objects in an immersive environment. The object(s) so selected are visually emphasized, and displayed to the user along with other query results.
- some analysis modules not only process transaction records, but also process customer data (or other exogenous non-transactional data) for use in combination with the transactional data.
- customer data will often be high-level, e.g. demographics by zip code, but this link to exogenous data provides a way to link to very detailed customer data results if available.
- Some presently-preferred embodiments use Google to capture speech, followed by Google Dialog Flow for parsing.
- Objective: to utilise cubes to give a detailed understanding of the types of customers that purchase a given item.
- a defined time frame (default to be selected based on rules)
- a defined region (default to be selected based on rules)
- Comparative product set (default to be selected based on rules)
- Input data for this intelligence will either be sourced from the
- Step 2 Sort by defined product set sales in the defined time frame, descending
- Step 3 Select top n to display (3 initially)
- Step 4 Add total sales figure from Source 1.2 to all rows
- Step 2 Calculate the proportion of comparative sales that the defined product set accounts for
- Step 4 Calculate the difference with expected comparative set sales
- Step 5 Sort by difference descending
- Step 6 Select the top 3 for display
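The share-of-comparative steps above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; the field names (`defined_sales`, `comparative_sales`) and the sample figures are invented.

```python
# Illustrative sketch: rank rows by how far actual defined-product-set sales
# deviate from the sales "expected" if the defined set held its overall share
# of the comparative set everywhere. Field names and figures are invented.

def rank_by_share_deviation(rows, top_n=3):
    """rows: list of dicts with 'name', 'defined_sales', 'comparative_sales'."""
    total_defined = sum(r["defined_sales"] for r in rows)
    total_comparative = sum(r["comparative_sales"] for r in rows)
    share = total_defined / total_comparative           # Step 2: overall proportion
    scored = []
    for r in rows:
        expected = share * r["comparative_sales"]       # expected defined-set sales
        scored.append({**r, "expected": expected,
                       "difference": r["defined_sales"] - expected})  # Step 4
    scored.sort(key=lambda r: r["difference"], reverse=True)          # Step 5
    return scored[:top_n]                                             # Step 6

rows = [
    {"name": "Group A", "defined_sales": 120.0, "comparative_sales": 1000.0},
    {"name": "Group B", "defined_sales": 300.0, "comparative_sales": 2000.0},
    {"name": "Group C", "defined_sales": 80.0,  "comparative_sales": 1500.0},
]
top = rank_by_share_deviation(rows, top_n=3)
```

Rows that over-index against their expected share surface first, which is the "difference descending" ordering the steps describe.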
- Step 1 Take data from source 1.2 as outlined below
- Step 2 Calculate the overall growth rate for the defined product set
- Step 3 Add the growth rate to source 1.1 and compute an expected defined product set sales by multiplying the comparative sales by the growth rate
- Step 4 Calculate the difference with expected defined product set sales
- Step 5 Sort by the difference descending
- Step 6 Select the top 3 for display
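The growth-rate variant above can be sketched similarly. This is one illustrative reading of the steps, assuming "expected" sales means prior-period sales scaled by the overall growth rate; the field names and figures are invented, not from the patent.

```python
# Illustrative sketch: apply the defined product set's overall growth rate to
# each row's prior-period sales, then rank rows by how far actual sales
# exceed that expectation. Field names and figures are invented.

def rank_by_growth_deviation(rows, top_n=3):
    """rows: list of dicts with 'name', 'prior_sales', 'current_sales'."""
    total_prior = sum(r["prior_sales"] for r in rows)
    total_current = sum(r["current_sales"] for r in rows)
    growth = total_current / total_prior               # Step 2: overall growth rate
    scored = []
    for r in rows:
        expected = r["prior_sales"] * growth           # Step 3: expected sales
        scored.append({**r, "expected": expected,
                       "difference": r["current_sales"] - expected})  # Step 4
    scored.sort(key=lambda r: r["difference"], reverse=True)          # Step 5
    return scored[:top_n]                                             # Step 6

rows = [
    {"name": "Brand X", "prior_sales": 100.0, "current_sales": 130.0},
    {"name": "Brand Y", "prior_sales": 200.0, "current_sales": 190.0},
    {"name": "Brand Z", "prior_sales": 100.0, "current_sales": 100.0},
]
top = rank_by_growth_deviation(rows)
```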
- Step 1 Take the key output metrics for each of the Intelligence Approaches 1.1:
- Step 2 Obtain the total sales for the defined product set for the defined time period
- Step 3 Utilise a Retailer specific "standardisation" table
- Step 4 Make all outputs in step 1 a proportion of the total from step 2
- Step 5 Create a relevance score
- score = (proportion of total - parm1)/parm2. This is illustrated below.
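The standardized relevance score above can be sketched directly. The parameter interpretation (parm1 as an expected proportion, parm2 as a spread) and the sample values are illustrative assumptions, not from the patent.

```python
# A minimal sketch of the standardized relevance score:
#   score = (proportion of total - parm1) / parm2
# where parm1 and parm2 are retailer-specific standardization parameters.
# The parameter values below are invented for illustration.

def relevance_score(metric_value, total, parm1, parm2):
    proportion = metric_value / total        # Step 4: express metric as a proportion
    return (proportion - parm1) / parm2      # Step 5: standardize into a score

# e.g. a segment holding 30% of sales, against an assumed 20% with a 5% spread:
score = relevance_score(300.0, 1000.0, parm1=0.20, parm2=0.05)
```

Because every module emits a score on this common scale, scores from different further-analysis modules can justifiably be ranked together.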
- Intelligence 2.1 to 2.3 Input Data Retailer Group: Data to be sourced from Omni or Uber Cubes
- Source 2.1 Sales by Retailer Groups
- Step 1 Take data from source 2.1 as outlined below
- Step 2 Sort by the defined product set sales descending and show the top n
- Step 1 Take data from source 2.1 as outlined below
- Step 2 Take the total sales for the defined product set and comparative product set in the defined time frame. Calculate the proportion of comparative set that the defined set accounts for.
- Step 3 Attach the proportion of comparison to the inputs from step 1. Multiply the proportion of comparison by the comparative product set sales (giving expected defined product set sales)
- Step 4 Calculate the difference to expected
- Step 5 Sort by the difference to expected descending and select the top n
- Step 1 Take data from source 2.2 as outlined below
- Step 2 Calculate the overall growth rate for the defined product set
- Step 3 Add the growth rate to source 2.1 and compute an expected defined product set sales by multiplying the comparative sales by the growth rate
- Step 4 Calculate the difference with expected defined product set sales
- Step 5 Sort by the difference descending and select the top n
- Step 1 Take the key output metrics for each of the Intelligence Approaches 2.1:
- Step 2 Obtain the total sales for the defined product set for the defined time period
- Step 3 Utilise a Retailer specific "standardisation" table:
- Step 4 Make all outputs in step 1 a proportion of the total from step 2
- Step 5 Create a relevance score
- score = (proportion of total - parm1)/parm2. This is illustrated below.
- Source 2.4 Total Sales by Retailer Groups
- Step 1 Take the key output metrics for each of the Intelligence Approaches
- Step 2 Obtain the Total Sales by Retailer Group for the defined product set for the defined time period
- Step 3 Utilise a Retailer specific "standardisation" table
- Step 4 Make all outputs in step 1 a proportion of the total from step 2
- Step 5 Create a relevance score
- score = (proportion of total - parm1)/parm2
- Step 1 Take the key output metrics for each of the Intelligence Approaches
- Step 2 Obtain the Total Sales by Retailer Group for the defined product set for the defined time period
- Step 3 Utilise a Retailer specific "standardisation" table
- Step 4 Make all outputs in step 1 a proportion of the total from step 2
- Step 5 Create a relevance score
- score = (proportion of total - parm1)/parm2
- Store 105 is most relevant per both module 2.5 and 2.6.
- Source 3.1 Number of Stores & Sales by Product Sets
- Source 3.2 Total Number of Stores & Total Sales
- Source 4.1 Total Sales by calendar day of month from TXN UPC TIMEOFDAY DAILY AGGR
- Source 4.2 Total Sales by Retailer Day of Week from TXN UPC TIMEOFDAY AGGR
- Source 4.3 Total Sales by hour from TXN UPC TIMEOFDAY AGGR
- Step 4 Sort to highlight the most extreme cases
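The patent does not specify how "the most extreme cases" are identified in Step 4; one hedged way to implement it is to rank time buckets by absolute z-score against the mean across buckets. Bucket labels and figures here are invented.

```python
# One possible (assumed) implementation of "sort to highlight the most
# extreme cases": standardize each time bucket's sales against the mean and
# standard deviation across buckets, then rank by absolute z-score.
import statistics

def most_extreme_buckets(sales_by_bucket, top_n=3):
    """sales_by_bucket: dict mapping bucket label (hour, weekday, ...) -> sales."""
    values = list(sales_by_bucket.values())
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    z = {k: (v - mean) / stdev for k, v in sales_by_bucket.items()}
    # Largest absolute deviation first, whether unusually high or low.
    return sorted(z.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]

hourly = {"09:00": 120, "12:00": 480, "15:00": 150, "18:00": 400, "21:00": 90}
extremes = most_extreme_buckets(hourly, top_n=2)
```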
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Systems and methods for processing queries against a large database of transactions. Initial queries are processed by lead analysis engines, but processing continues; the output of the lead analysis engine provides context, and aids in selecting a further-processing module. Multiple results, from multiple further-processing modules, are displayed in a ranked list (or equivalent). The availability of multiple directions of further analysis helps the user to develop an intuition for what trends and drivers might be behind the numbers. Most preferably the resulting information is used to select one or more objects in an immersive environment. The selections are visualized and displayed with other results. Some analysis modules also process exogenous nontransactional data for use in combination with transactional data. The customer data will often be high-level, e.g. demographics by zip code, but this link to exogenous data permits linking to very detailed customer data results if available.
Description
CONVERSATIONAL INTELLIGENCE
ARCHITECTURE SYSTEM
CROSS-REFERENCE
[0001] Priority is claimed from US provisional application 62/598,644 filed 14 Dec. 2017, which is hereby incorporated by reference. Priority is also claimed, where available, from 15/878,275 filed 23 Jan. 2018, and therethrough from 62/449,406 filed 23 Jan. 2017, both of which are also hereby incorporated by reference.
BACKGROUND
[0002] The present application relates to computer and software systems, architectures, and methods for interfacing to a very large database which includes detailed transaction data, and more particularly to interfaces which support category managers and similar roles in retail operations.
[0003] Note that the points discussed below may reflect the hindsight gained from the disclosed inventions, and are not necessarily admitted to be prior art.
[0004] Retailers are struggling to achieve growth, especially in center store categories due to a decline in baskets and trips. Customers are becoming more and more omnichannel each day due to convenience, utilizing in-store and online delivery, and utilizing multi-store trips. Pressure is rising on retailers to blend the convenience of online with an enticing, convenient store: retailers need to drive full and loyal trips to ensure growth. Fill-in Trips are rising with the decline of destination trips, such that twenty fill-in trips might equate to only one destination trip. The only way to offset these cycles is by optimizing offerings so as to solidify customer loyalty.
SUBSTITUTE SHEET (RULE 26)
[0005] Customer data and a customer-first strategy are essential in meeting these changing needs. Focusing on inventory and margin only goes so far with the modern, ever- demanding customer, and retailers and CPGs must collaborate in order to achieve growth.
[0006] Retailers are realizing that they can generate revenue by sharing their customers’ purchase data, and in many cases companies will even pay a premium up front for this data.
[0007] A common problem faced by category managers and other high-level business users is that they: a) aren’t necessarily data analysts, and even if they are, b) they don’t have the time to do necessary analysis themselves, so c) they have to send data to data analysts for answers, who d) have a typical turnaround time of ~2-10 days, when e) those answers are needed NOW!
[0008] Category Managers are not achieving growth and margin targets. They are extremely busy with daily, weekly, monthly and seasonal work. They typically pull data from disparate data sources and tools, use Excel as their only method of analysis, and cannot make optimal decisions.
CONVERSATIONAL INTELLIGENCE
ARCHITECTURE SYSTEM
[0009] The present application teaches, among other innovations, new architecture (and systems and related methods) built around a conversational intelligence architecture system used for retail and CPG (Consumer Packaged Goods) management, especially (but not only) category management. There are a number of components in this architecture, and correspondingly there are a number of innovative teachings disclosed in the present application. While these innovative teachings all combine synergistically, it should be noted that different innovations, and different combinations and subcombinations of these innovations, are all believed to be useful and nonobvious. No disclosed inventions, nor combinations thereof, are disclaimed nor relinquished in any way.
[0010] Modern category management requires support for category managers to dig deeply into a large database of individual transactions. (Each transaction can e.g. correspond to one unique item on a cash register receipt.) The various disclosed inventions support detailed response to individual queries, and additionally provide rich enough data views to allow managers to “play around” with live data. This is a challenging interface requirement, particularly in combination with the need for near-real-time response.
[0011] The present application teaches that, in order for managers to make the best use of the data interface, it is important to provide answers quickly and in a format which helps managers to reach an intuitive understanding. A correct quantitative response is not good enough: the present application discloses ways to provide query responses which support and enhance the user’s deep understanding and intuition.
[0012] In the following descriptions, a “user” will typically (but not exclusively) be a category manager in a large retail or CPG operation, i.e. one with tens, hundreds, or thousands of physical locations, and thousands or more of distinct products. Other users (such as store or location managers or higher-level executives) often benefit from this interface, but the category manager is a primary driving instance.
[0013] In one group of inventions, user queries are matched to one of a predetermined set of tailored analytical engines. The output of the natural language parser is used to select one of those preloaded query engines as a “lead” analysis engine. The lead analysis engine provides an initial output (preferably graphic) which represents a first-order answer to the parsed query, but this is not (nearly) the end of the process. The lead analysis engine’s initial output is displayed to the user, and provides context supporting further interaction.
[0014] The preloaded analytical engines are supported by a set of further-analysis modules. (In the presently preferred embodiment, seven different further-analysis modules are present, but of course this number can be varied.) These further-analysis modules intelligently retrieve data from ones of multiple pre-materialized “data cubes,” and accordingly provide further expansion of the response. Preferably the further-analysis modules are run in parallel, and offered to the user in a rank order. This results in a degree of interactivity in query response.
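The parallel-then-rank behavior described above might be sketched as follows. The module names, scores, and thread-pool approach are all illustrative assumptions, not the patent's implementation.

```python
# A minimal, hypothetical sketch of running several further-analysis modules
# in parallel and offering their outputs to the user in rank order.
from concurrent.futures import ThreadPoolExecutor

def run_modules_ranked(modules, data):
    """modules: dict name -> callable(data) returning (relevance_score, result)."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, data) for name, fn in modules.items()}
        outputs = {name: f.result() for name, f in futures.items()}
    # Rank by relevance score, descending, for display to the user.
    return sorted(outputs.items(), key=lambda kv: kv[1][0], reverse=True)

# Placeholder modules loosely named after the analytical frameworks:
modules = {
    "who":   lambda d: (0.7, "customer-segment view"),
    "where": lambda d: (0.9, "store/region view"),
    "when":  lambda d: (0.4, "time-of-day view"),
}
ranked = run_modules_ranked(modules, data=None)
```

Because the relevance scores are standardized, the outputs of heterogeneous modules can share one ranked list.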
[0015] Preferably natural language queries are parsed, e.g. by a conventional parser, and immersive visualizations are used, as output, to immediately illuminate the response to a query. For example, “planogram views” show images of the products on virtual shelves. These are preferably interactive, so that a user can “lift” a (virtual) product off the (virtual) shelf, and thereby see corresponding specific analyses. Note that there is a strong synergy between the natural-language input and the immersive-visualization output. The combination of these two user interfaces provides a fully usable system for users who are not inclined to quantitative thinking. At the same time, users who want to dig into quantitative relationships can easily do so. This versatility would not be available except by use of both of these user interface aspects.
[0016] A further feature is the tie in to exogenous customer data. The “Customer360” module provides a deep insight into customer behaviors and differentiation. When this module is present, queries for which the knowledge of individual customers’ activities would be helpful can be routed to this module.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The disclosed inventions will be described with reference to the accompanying drawings, which show important sample embodiments and which are incorporated in the specification hereof by reference, wherein:
[0018] Figure 1 shows an overview of operations in the preferred implementation of the disclosed inventions.
[0019] Figure 2 combines an entity relationship diagram with indications of the information being transferred. This diagram schematically shows the major steps in getting from a natural language query to an immersive-environment image result.
[0020] Figure 3 shows a different view of implementation of the information flows and processing of Figure 2.
[0021] Figure 4A shows how the different analytical module outputs are ranked to provide "highlights" for the user to explore further. From such a list, users can select "interesting" threads very quickly indeed.
[0022] Figures 4B and 4C show two different visualization outputs.
[0023] Figure 5A is a larger version of Figure 4C, in which all the shelves of a store are imaged. Some data can be accessed from this level, or the user can click on particular aisles to zoom in.
[0024] Figure 5B shows how selection of a specific product (defined by its UPC) opens a menu of available reports.
[0025] Figure 6 shows how a geographic display can be used to show data over a larger area; in this example, three geographic divisions of a large retail chain are pulled up for comparison.
[0026] Figure 7 shows how broad a range of questions can be parsed. This can include anything from quite specific database queries to fairly vague inquiries.
[0027] Figure 8 shows some examples of the many types of possible queries.
[0028] Figure 9 shows a specific example of a query, and the following figures (Figures 10A, 10B, and 11) show more detail of its handling. This illustrates the system of Figure 2 in action.
[0029] Figure 12 shows one sample embodiment of a high-level view of an analytical cycle according to the present inventions.
[0030] Figure 13 shows less-preferred (upper left) and more-preferred (lower right) ways of conveying important information to the user.
[0031] Figure 14 shows an exemplary interpretive metadata structure.
[0032] Figure 15 generally shows the data sets which are handled, with daily or weekly updating. The right side of this Figure shows how pre-aggregated data is generated offline, so that user queries can access these very large data sets with near-realtime responsiveness.
[0033] Figure 16 shows some sample benefits and details of the Customer 360 modules.
[0034] Figures 17A-17B show two different exemplary planogram views.
DETAILED DESCRIPTION OF SAMPLE EMBODIMENTS
[0035] The numerous innovative teachings of the present application will be described with particular reference to presently preferred embodiments (by way of example, and not of limitation). The present application describes several inventions, and none of the statements below should be taken as limiting the claims generally.
[0036] The present application teaches new architecture and systems and related methods for a conversational intelligence architecture system used in a retail setting. There are a number of components in this architecture, and correspondingly there are a number of innovative teachings disclosed in the present application. While these innovative teachings all combine synergistically, it should be noted that different innovations, and different subcombinations of these innovations, are all believed to be useful and nonobvious. No disclosed inventions, nor combinations thereof, are disclaimed nor relinquished in any way.
[0037] In one group of inventions, user queries are matched to one of a predetermined set of tailored analytical engines. (For example, in the presently preferred embodiment, 13 different analytical engines are present, but of course this number can be varied.) The output of the natural language parser is used to select one of those preloaded query engines as a “lead” analysis module. The lead analysis engine provides a graphic output representing an answer to the parsed query, but this is not (nearly) the end of the process. The lead analysis engine’s initial output is displayed to the user, and provides context supporting further interaction. These analytical engines intelligently direct a query to the right one of multiple pre-materialized “data cubes” - seven are used in the current preferred implementation.
[0038] The preloaded analytical engines are supported by a set of further-analysis modules. (In the presently preferred embodiment, seven different further-analysis modules are present, but of course this number can be varied.) These further-analysis modules intelligently retrieve data (from the data cubes) and run analyses corresponding to further queries within the context of the output of the lead analysis engine.
[0039] Preferably natural language queries are parsed, e.g. by a conventional parser, and immersive visualizations are used, as output, to immediately illuminate the response to a query. For example, “planogram views” (as seen in e.g. Figures 17A-17B) show images of the products on virtual shelves. These are preferably interactive, so that a user can “lift” a (virtual) product off the (virtual) shelf, and thereby see corresponding specific analyses.
[0040] Note that there is a strong synergy between the natural- language input and the immersive-visualization output. The combination of these two user interfaces provides a fully usable system for users who are not inclined to quantitative thinking. At the same time, users who want to dig into quantitative relationships can easily do so. This versatility would not be available except by use of both of these user interface aspects.
[0041] The further-analysis modules too are not the end of the operation. For each query, each further-analysis module is scored on its effect upon the initial query. These results are displayed in relevance order in the “intelligence insight”. The methodology for calculating the relevance score in each further-analysis module is standardized so these multiple scores can justifiably be ranked together.
[0042] A further feature is the tie-in to exogenous data, such as customer data. The “Customer360” module provides a deep insight into customer behaviors and differentiation. When this module is present, queries for which the knowledge of individual customers’ activities would be helpful can be routed to this module.
[0043] This optional expansion uses an additional dataset AND analytical modules and knowledge base to provide a wider view of the universe. Using the above architecture, queries can be routed to the customer database or the sales database as appropriate. The “Customer360” module permits the introduction of e.g. other external factors not derivable from the transactional data itself.
[0044] In one sample embodiment like that of Figure 1, a full query cycle, from user input to final output, can be e.g. as follows:
a. The user inputs a query into the front-end interface in natural language;
b. Natural Language Processing (NLP) determines the intent and scope of the request;
c. The intent and scope are passed to the analysis module;
d. In the analysis module:
i. The intent and scope are used to determine the lead analysis engine;
ii. In the lead analysis engine:
1. From the intent and scope, the lead analysis engine determines what data cube(s) are relevant to the query at hand, and retrieves the appropriate fundamental data block(s);
A. In some embodiments, the fundamental data block(s) are translated into appropriate visualization(s) and displayed to the user at this point. The user can then select one or more desired further-analysis engines. In other embodiments, the intent and scope include the selection of one or more further-analysis modules, and the fundamental data block(s) are returned later. In still other embodiments, this can be different.
2. The fundamental data block(s) are passed to the further-analysis engine(s);
3. In the further-analysis engine(s):
A. The fundamental data block(s) are analyzed according to one or more sub-module metrics;
B. Relevance scores are calculated for the subsequent result block(s);
C. Based on the relevance scores, the further-analysis engine determines which result block(s) are most important to the query at hand (e.g., when using an outlier-based lead analysis engine, this can be the most significant outlier(s));
D. One or more intelligence block(s) are populated based on the most important result block(s), and the intelligence block(s) are passed back to the lead analysis engine;
4. The lead analysis engine then returns the fundamental and intelligence blocks;
iii. The fundamental and intelligence blocks are then passed back out of the analysis module;
e. The fundamental and intelligence blocks are translated into natural language results, visualizations, and/or other means of usefully conveying information to the user, as appropriate; and
f. The translated results are displayed to the user.
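The full cycle (a)-(f) above can be condensed into a hypothetical orchestration sketch. All names here (`nlp`, `engines`, `cubes`) and the toy scoring are invented for illustration; they are not the patent's actual components.

```python
# A simplified, hypothetical sketch of the query cycle:
# NLP -> intent/scope -> lead engine fetches fundamental block(s) from cubes
# -> further-analysis engines score result blocks -> ranked blocks returned.

def process_query(query, nlp, engines, cubes):
    intent, scope = nlp(query)                       # steps a-b: parse to intent/scope
    lead = engines[intent]                           # step d.i: pick the lead engine
    fundamental = lead["fetch"](cubes, scope)        # d.ii.1: fundamental data block(s)
    intelligence = []
    for further in lead["further"]:                  # d.ii.2-3: further-analysis engines
        score, block = further(fundamental)
        intelligence.append((score, block))
    intelligence.sort(reverse=True)                  # most relevant block first
    return fundamental, intelligence                 # d.ii.4 / e-f: ready for display

# Invented stand-ins for the NLP front end, data cubes, and engines:
nlp = lambda q: ("sales_by_region", {"weeks": 52})
cubes = {"region": {"North": 10.0, "South": 14.0}}
engines = {"sales_by_region": {
    "fetch": lambda c, s: c["region"],
    "further": [lambda f: (0.8, max(f, key=f.get)),   # a "where"-style module
                lambda f: (0.3, min(f, key=f.get))],
}}
fundamental, intelligence = process_query("Where do we sell most?", nlp, engines, cubes)
```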
[0045] Three aspects of the instant inventions together provide a holistic solution which is unique in the industry:
[0046] 1) Simple Business Natural Language Questions: Simple natural-language business questions result in clear, decisive, insightful answers.
[0047] 2) Powered by AI Technology: AI techniques that are both cutting-edge and also based on solutions that have evolved from years of analytical IP that have driven Symphony’s customers, now delivered in seconds.
[0048] 3) Extremely Fast Datasets: Incredibly fast datasets, honed to deliver instantaneous data for AI to consume, provide substantially real-time analysis that delivers answers when the user needs them, not days or weeks later.
[0049] Data cubes are essentially one step up from raw transactional data, and are aggregated over one source. A primary source of raw data is transactional data, typically with one line of data per unique product per basket transaction. Ten items in a basket means ten lines of data, so that two years of data for an exemplary store or group might comprise two billion lines of data or more. Data cubes are preferably calculated offline, typically when the source data is updated (which is preferably e.g. weekly).
[0050] Data cubes store multiple aggregations of data above the primary transactional data, and represent different ways to answer different questions, providing e.g. easy ways to retrieve division totals by week. Data cubes “slice” and aggregate data one way or another, preloading answers to various queries. Presently, queries are preferably framed in terms of product, geography, time of sale, and customer. These pre-prepared groupings allow near-instantaneous answers to various data queries. Instead of having to search through e.g. billions of rows of raw data, these pre-materialized data cubes anticipate, organize, and prepare data according to the questions addressed by e.g. the relevant lead analysis engine(s).
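Offline pre-aggregation of transaction lines into such a cube might be sketched as follows. The dimension and field names are invented for illustration; a production system would materialize and persist many such cubes per update cycle.

```python
# A hedged sketch of building a pre-aggregated "data cube" offline from
# transaction lines (one line per unique product per basket transaction).
from collections import defaultdict

def build_cube(transactions, dims):
    """Aggregate sales by the given dimension tuple, e.g. ('week', 'division')."""
    cube = defaultdict(float)
    for t in transactions:
        key = tuple(t[d] for d in dims)   # one cell per combination of dimensions
        cube[key] += t["sales"]
    return dict(cube)

# Invented sample transaction lines:
transactions = [
    {"week": 1, "division": "North", "upc": "0001", "sales": 2.5},
    {"week": 1, "division": "North", "upc": "0002", "sales": 4.0},
    {"week": 1, "division": "South", "upc": "0001", "sales": 3.0},
]
weekly_by_division = build_cube(transactions, ("week", "division"))
```

Queries then read the small pre-aggregated cube instead of scanning billions of raw rows, which is what makes near-real-time response feasible.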
[0051] “FACTS” is a mnemonic acronym which refers to: the Frequency of customer visits, Advocated Categories the customer buys, Total Spend of the customer; these elements give some overall understanding of customers’ engagement and satisfaction with a retailer. This data is used to compile one presently-preferred data cube.
[0052] Another presently-preferred data cube, known as TruPrice™, looks at customer analysis, e.g. by identifying whether certain customers are more price-driven, more quality-driven, or price/quality neutral.
[0053] People who need the analysis (particularly but not exclusively category managers): a) aren’t necessarily data analysts, and even if they are, b) don’t have the time to do analysis themselves, so c) have to send data to data analysts, which d) has a typical turnaround time of ~2-10 days, when e) they need those answers NOW! The present inventions can give those crucial answers in seconds or minutes, allowing retailers and CPGs the necessary flexibility to react to emerging trends in near-real time.
[0054] Presently-preferred further-analysis modules address data according to seven analytical frameworks (though of course more, fewer, and/or different further-analysis modules are possible):
[0055] 1 - Who
[0056] 2 - Where
[0057] 3 - What Sells More
[0058] 4 - When
[0059] 5 - What Sells Together
[0060] 6 - Trial Vs. Repeat Sales
[0061] 7 - Sales Drivers
[0062] Some possible aggregated data cubes address the following considerations, as seen in e.g. Figure 15:
[0063] By Product: includes sales sorted by e.g. total, major department, department category, subcategory, manufacturer, and brand.
[0064] By Time: includes sales sorted by e.g. week, period, quarter, and year, as well as year-to-date and rolling periods of e.g. 4, 12, 13, 26, and 52 weeks.
[0065] By Geography/Geographical Consideration: includes sales sorted by e.g. total, region, format, and store.
[0066] By Customer Segments: includes sales as categorized according to data from e.g. FACTS, market segment, TruPrice, and/or retailer-specific customer segmentations.
[0067] The present inventions can provide the following insight benefits, as seen in e.g. Figure 16:
[0068] Assortment: Understand which items are most important to Primary shoppers; Measure customer switching among brands; Measure and benchmark new products.
[0069] Promotions: Understand effectiveness for best shoppers;
Identify ineffective promotions.
[0070] Pricing: Identify KVIs for investment.
[0071] Affinities: Understand categories and brands that are purchased together to optimize merchandising.
[0072] One sample embodiment can operate as follows, as seen in e.g.
Figures 2 and 3.
[0073] A) Preferably natural language queries are parsed, e.g. by a conventional parser.
[0074] B) The queries are matched to one of a predetermined set of tailored analytical engines. (In the presently preferred embodiment, 13 different analytical engines are present, but of course this number can be varied.) The output of the natural language parser is used to select one of those preloaded query engines as a “lead” analysis engine. These analytical engines can utilize multiple data cubes. The lead analysis engine provides a graphic output representing an answer to the parsed query, but this is not (nearly) the end of the process. The lead analysis engine’s initial output is displayed to the user, and provides context supporting further interaction.
[0075] C) The preloaded query engines are supported by a set of further-analysis modules. (In the presently preferred embodiment, seven different further-analysis modules are present, but of course this number can be varied.) These further-analysis modules intelligently retrieve data again from the “data cubes” and run analyses corresponding to further queries within the context of the output of the lead analysis engine.
[0076] D) The further-analysis modules too are not the end of the operation. Each further-analysis module provides a relevance score in proportion to its relevance to its effect upon the initial query. These are displayed in relevance order in the “intelligence insight”. The methodology for calculating the relevance score in each further-analysis module is standardized so these multiple scores can justifiably be ranked together.
[0077] Standardization tables and associated distribution parameters provide metadata tables to tell the front-end interface what is different about a given retailer as compared to other retailers, which might have different conditions. This permits use of metadata to control what data users see, rather than having different code branches for different retailers. These standardization tables address questions like, e.g., “does this particular aspect make sense within this retailer or not, and should it be hidden?” These configuration parameters switch on or off what may or may not make sense for that retailer.
[0078] E) Another innovative and synergistic aspect is the connection to immersive visualizations which immediately illuminate the response to a query. A business user can interact by using natural-language queries, and then receive “answers” in a way that is retailer-centric. For example, the “planogram view” shows products on virtual shelves, and also provides the ability to (virtually) lift products off the shelf and see analyses just for that product. This provides the business user with the pertinent answers with no technical barriers.
[0079] F) In addition to Sales and Customer Segmentation, knowledge can be added by an additional dataset describing other information on the customers who shop in the store(s) in question. “Customer360” can provide deep insight into customer behaviors beyond what they do in the subject store itself.
[0080] Optionally, Customer360 (C360) can provide additional datasets and/or analytical modules to provide a wider view of the universe. Queries can be routed e.g. to the customer database (or other exogenous database) or the sales database, as appropriate.
[0081] Customer 360 helps retailers and CPG manufacturers gain a holistic real-time view of their customers, cross-channel, to fuel an AI-enabled application of deep, relevant customer insights. Customer 360 is a first-of-its-kind interactive customer data intelligence system that uses hundreds of shopper attributes for insights beyond just purchasing behavior. Customer 360 can take into consideration other data that influences how customers buy, such as e.g. other competition in the region, advertising in the region, patterns of customer affluence in the region, how customers are shopping online vs. in brick and mortar stores, and the like.
[0082] For example, Customer 360 can help the user identify that, e.g., a cut-throat big box store down the block is “stealing” customers from a store, or that, e.g., a new luxury health food store nearby is drawing customers who go to the Whole Foods for luxuries and then come to the user’s store for basics.
[0083] The user asks a question that defines a scope (e.g. Snacks in California) and an intent (e.g. show me sales development). CMS presents fundamental sales development information related to the defined scope through, e.g., a background view (of a relevant map, store, or other appropriate context), and e.g. sections of an analytical information panel (including e.g. summary, customer segments, and sales trend).
[0084] CMS presents additional / “intelligent” information in another section of the analytical information panel. This could be, e.g., one or more information blocks that the system automatically identifies as relevant to explain issues that are impacting the sales development in the current scope, or e.g. an information block that answers an explicit question from the user (e.g. “What are people buying instead of cookies?”).
[0085] From a behind-the-scenes perspective, a question (as derived from user interaction with the system) preferably defines at least a scope, e.g. {state: ‘CA’, department: ‘SNACKS’, dateRange: ‘201601-201652’}, and an intent, e.g. showSales | showTop departments | showSwitching for Cookies.
[0086] The front-end user interface contacts the backend information providers according to the scope & intent, and then the backend provides a fundamental info block and several “intelligent information blocks” that can be displayed in the analytical information panel.
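A sketch of the request/response shape implied above; the scope and intent values follow the example in the text, but the backend function and block contents are hypothetical stand-ins:

```python
# Hypothetical sketch of the front-end/back-end exchange. A real backend
# would query the data cubes; this stub only shows the response shape:
# one fundamental block plus ranked "intelligent information blocks".

request = {
    "scope": {"state": "CA", "department": "SNACKS", "dateRange": "201601-201652"},
    "intent": "showSales",
}

def backend(request):
    return {
        "fundamental": {"summary": "...", "customerSegments": "...", "salesTrend": "..."},
        "intelligent": [
            {"module": "who_is_buying", "relevance": 0.46},   # invented scores
            {"module": "where_buying", "relevance": 0.11},
        ],
    }

response = backend(request)
# Intelligent blocks arrive ordered by relevance for display.
assert response["intelligent"][0]["relevance"] >= response["intelligent"][1]["relevance"]
```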
[0087] Lead analysis engines relate to what the user is looking for, e.g.: outlier data? Whether and why customers are switching to or from competitors? Performance as measured against competitors? Customer loyalty data? Success, failure, and/or other effects of promotional campaigns?
[0088] When relevant, fundamental data is most preferably presented together with some level of interpretation, as in e.g. Figure 13. This can be provided via, e.g., rules hard-coded in the user interface, and/or an interpretation “process” that adds metadata to the information block, as in e.g. Figure 14.
SUBSTITUTE SHEET (RULE 26)
[0089] The further-analysis module(s) preferably look at, e.g., which attributes have a relevant impact on sales.
[0090] In some embodiments, intelligence domains can include e.g. the following, for e.g. an outlier-based lead analysis engine:
[0091] Who is buying, e.g. New customers, Exclusive customers, lost customers, customer segments, Breadth of purchase?
[0092] Where are they buying, e.g. Top selling stores, Bottom selling stores, best performing stores, worst performing stores, expansion opportunities, sales by geography?
[0093] When are they buying, e.g. Sales by Day of Week, Sales by time
[0094] What sells more, e.g. Products switching from, what people buy instead of, top selling products across stores, top selling products by store format, what people buy with ...
[0095] What sells better together, e.g. What are the other products / categories that are bought together? What other brands are customers buying with my brand?
[0096] What is the impact of promotions?
[0097] What are trial and repeat outcomes, e.g. What are the new product launches in category? How did they perform?
[0098] How does loyalty impact growth, e.g. What groups of products are driving new customers into the category? What are my most loyal brands? Which groups of customers have driven growth?
[0099] What is the source of volume, e.g. Did I switch sales from one product to another? For this new product, what did people previously buy? Do sales come from people new to the category?
[00100] Category management reports can include, e.g., an Event Impact Analyzer, which measures the impact of an event against key sales metrics; a Switching Analyzer, which can diagnose what is driving changes in item sales; a Basket Analyzer, which identifies cross-category merchandising opportunities; a Retail Scorecard, which provides high- level business and category reviews by retailer hierarchy and calendar; and numerous other metrics.
[00101] Figure 2 combines an entity relationship diagram with indications of the information being transferred. This diagram schematically shows the major steps in getting from a natural language query to an immersive-environment image result.
[00102] Starting with any question, step 1 uses natural language parsing (NLP) to obtain the intent and scope of the question.
[00103] Step 2 is a triage step which looks at the intent and scope (from Step 1) to ascertain which analytical module(s) should be invoked, and what should be the initial inputs to the module(s) being invoked.
[00104] In Step 3, one or more modules of the Analytical Library are invoked accordingly; currently there are 13 modules available in the Analytical Library, but of course this can change.
[00105] In Step 4, the invoked module(s) are directed to one or more of the Data Sources. Preferably intelligent routing is used to direct access to the highest level of data that is viable.
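The "intelligent routing" idea can be sketched as below; the cube names and the viability rules are assumptions for illustration only — the point is simply that the most aggregated source that can still answer the query is preferred:

```python
# Hypothetical sketch of intelligent routing: prefer the most aggregated
# data source that remains viable for the query, falling back to finer-
# grained cubes only when the query demands it.

CUBES = ["weekly_summary_cube", "daily_cube", "transaction_level_cube"]  # coarse -> fine

def route(scope, needs_basket_detail=False):
    if needs_basket_detail:
        return "transaction_level_cube"   # only raw transactions will do
    if scope.get("dateGrain") == "day":
        return "daily_cube"               # daily grain requested
    return "weekly_summary_cube"          # highest viable level of data
```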
[00106] The resulting data loading is labeled as Step 5.
[00107] The invoked module(s) of the Analytical Library now provide their outputs, which are ranked according to (in this example) novelty and relevance to the query. This produces an answer in Step 6.
[00108] Finally, a visualizer process generates a visually intuitive display, e.g. an immersive environment (Step 7).
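Steps 1 through 7 can be sketched end-to-end as follows, with trivial stand-in implementations for each stage; every name and value here is a hypothetical placeholder, not the actual implementation:

```python
# Hypothetical end-to-end sketch of the Figure 2 pipeline. Each stage is a
# trivial stub so the overall flow can actually be executed.

def parse_nlp(question):
    # Step 1: derive intent and scope (stub: crude keyword match)
    intent = "showSales" if "sales" in question.lower() else "showSwitching"
    return intent, {"department": "SNACKS"}

def triage(intent, scope):
    # Step 2: decide which analytical modules to invoke, and their inputs
    return ["sales_module", "switching_module"], scope

def run_module(name, scope):
    # Steps 3-5: invoke the module, route to a data cube, load data (stubbed)
    fake_relevance = {"sales_module": 0.8, "switching_module": 0.3}
    return {"module": name, "relevance": fake_relevance[name]}

def answer_query(question):
    intent, scope = parse_nlp(question)                        # Step 1
    modules, inputs = triage(intent, scope)                    # Step 2
    answers = [run_module(m, inputs) for m in modules]         # Steps 3-5
    answers.sort(key=lambda a: a["relevance"], reverse=True)   # Step 6: rank
    return answers                                             # Step 7 would visualize

print(answer_query("How are snack sales developing?")[0]["module"])
```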
[00109] Figure 3 shows a different view of implementation of the information flows and processing of Figure 2. A natural language query appears, and natural language processing is used to derive intent and scope values. These are fed into the Analytical API, which accordingly makes a selection into the Analytical Library as described above. The invoked analytical modules accordingly use the Data API to get the required data, and then generate an answer. The answer is then translated into a visualization as described.
[00110] Figure 4A shows how the different analytical module outputs are ranked to provide "highlights" for the user to explore further. From such a list, users can select "interesting" threads very quickly indeed.
[00111] Figures 4B and 4C show two different visualization outputs.
Figure 4B shows a geographic display, where data is overlaid onto a map.
[00112] Figure 4C shows a Category display, where a category manager can see an image of the retail products in a category, arranged on a virtual retail display. This somewhat-realistic display provides a useful context for a category manager to change display order, space, and/or priorities, and also provides a way to select particular products for follow-up queries.
[00113] Figure 5A is a larger version of Figure 4C, in which all the shelves of a store are imaged. Some data can be accessed from this level, or the user can click on particular aisles to zoom in.
[00114] Figure 5B shows how selection of a specific product (defined by its UPC) opens a menu of available reports.
[00116] Figure 6 shows how a geographic display can be used to show data over a larger area; in this example, three geographic divisions of a large retail chain are pulled up for comparison.
[00117] Figure 7 shows how broad a range of questions can be parsed.
This can include anything from quite specific database queries to fairly vague inquiries.
[00118] Figure 8 shows some examples of the many types of possible queries.
[00119] Figure 9 shows a specific example of a query, and the following figures (Figures 10A, 10B, and 11) show more detail of its handling. This illustrates the system of Figure 2 in action.
[00120] Figure 12 shows one sample embodiment of a high-level view of an analytical cycle according to the present inventions.
[00121] Figure 13 shows less-preferred (upper left) and more-preferred (lower right) ways of conveying important information to the user.
[00122] Appendices A-G show exemplary analytical specifications for the seven presently-preferred further-analysis modules, and Appendix H shows exemplary source data for Appendices A-G. These appendices are all hereby incorporated by reference in their entirety.
[00123] Explanation of Terminology:
[00124] CMS/C-Suite: Category Management Suite
[00125] KVI can refer both to Key Value Indicators, and also to Key Value Items, e.g. products within a category the user may wish to track, such as those that strongly drive sales in their category; these are indicative of important products for price investment.
[00126] C360: Customer 360° Intelligence
[00127] CPG: Consumer Packaged Goods represent the field one step earlier and/or broader in the supply chain than retail. Well-known CPG companies include, e.g., Procter & Gamble™, Johnson & Johnson™, Clorox™, General Mills™, etc., whereas retail is direct client/customer/consumer sales, such as e.g. Target™, Albertsons™, etc. CPG is not strictly exclusive of retail; CPG brands like e.g. New Balance, clothing brands like Prada and Gucci, some spice companies, and others are both CPG and retail, in that they sometimes have their own stores in addition to selling to retailers.
Advantages
[00128] The disclosed innovations, in various embodiments, provide one or more of at least the following advantages. However, not all of these advantages result from every one of the innovations disclosed, and this list of advantages does not limit the various claimed inventions.
• Fast response to managers’ database inquiries;
• Intuitive presentation of query responses;
• Augmentation of transactional data with exogenous data, such as customer data; and
• Automated data access, analysis and decision support.
[00129] According to some but not necessarily all embodiments, there is provided: A method for processing queries into a large database of transactions, comprising the actions of: receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
[00130] According to some but not necessarily all embodiments, there is provided: a method for processing queries into a large database of transactions, comprising the actions of: receiving and parsing a natural language query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to a large set of transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; allowing the user to select at least one of the further-analysis modules, and displaying the results from the selected further-analysis module to the user with an immersive environment, in which items relevant to the query are made conspicuous.
[00131] According to some but not necessarily all embodiments, there is provided: A method for processing queries into a large database of transactions, comprising the actions of: receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module; applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data; wherein at least one said analysis module operates not only on transactional data, but also on customer data which is not derived from transactional data; and allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
[00132] According to some but not necessarily all embodiments, there is provided: A method for processing queries into a large database of transactions, comprising the actions of: receiving and parsing a natural language query from a user, and accessing a database of transactions to thereby produce an answer to the query; and displaying an immersive environment to the user, in which objects relevant to the query are made conspicuous.
[00133] According to some but not necessarily all embodiments, there is provided: A method for processing queries into a large database of transactions, comprising the actions of: when a user inputs a natural-language query into the front-end interface, natural language processing determines the intent and scope of the request, and passes the intent and scope to the analysis module; in the analysis module, the intent and scope are used to select a primary analysis engine; from the intent and scope, the primary analysis engine determines what data cube(s) are relevant to the query at hand, and retrieves the appropriate fundamental data block(s); the fundamental data block(s) are passed to the specific/secondary analysis engine(s); in the specific/secondary analysis engine(s): the fundamental data block(s) are analyzed according to one or more sub-module metrics; relevance scores are calculated for the subsequent result block(s); and, based on the relevance scores, the specific/secondary analysis engine determines which result block(s) are most important to the query at hand; one or more intelligence block(s) are populated based on the most important result block(s), and the intelligence block(s) are passed back to the primary analysis engine; the primary analysis module then returns the fundamental and intelligence blocks; the fundamental and intelligence blocks are then passed back out of the analysis module, whereupon the fundamental and intelligence blocks are translated into natural language results, visualizations, and/or other means of usefully conveying information to the user, as appropriate; and the translated results are displayed to the user.
[00134] According to some but not necessarily all embodiments, there is provided: Systems and methods for processing queries against a large database of transactions. An initial query is processed by a lead analysis engine, but processing does not stop there; the output of the lead analysis engine is used to provide general context, and is also used to select a further-processing module. Multiple results, from multiple further-processing modules, are displayed in a ranked list (or equivalent). The availability of multiple directions of further analysis helps the user to develop an intuition for what trends and drivers might be behind the numbers. Most preferably the resulting information is used to select one or more objects in an immersive environment. The object(s) so selected are visually emphasized, and displayed to the user along with other query results. Optionally, some analysis modules not only process transaction records, but also process customer data (or other exogenous non-transactional data) for use in combination with the transactional data. The customer data will often be high-level, e.g. demographics by zip code, but this link to exogenous data provides a way to link to very detailed customer data results if available.
Modifications and Variations
[00135] As will be recognized by those skilled in the art, the innovative concepts described in the present application can be modified and varied over a tremendous range of applications, and accordingly the scope of patented subject matter is not limited by any of the specific exemplary teachings given. It is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.
[00136] Additional general background, which helps to show variations and implementations, as well as some features which can be implemented synergistically with the inventions claimed below, may be found in the following US patent applications. All of these applications have at least some common ownership, copendency, and inventorship with the present application, and all of them, as well as any material directly or indirectly incorporated within them, are hereby incorporated by reference: US applications 15/878,275 (SEYC-11) and 62/349,543 (SEYC-10).
[00137] It should be noted that, while the terms “retail” and “retailer” are used throughout this application, the terms are used for simplicity, and should be understood to include both retail and CPG (Consumer Packaged Goods) applications.
[00138] Some presently-preferred embodiments use Google to capture speech, followed by Google Dialog Flow for parsing.
[00139] None of the description in the present application should be read as implying that any particular element, step, or function is an essential element which must be included in the claim scope: THE SCOPE OF PATENTED SUBJECT MATTER IS DEFINED ONLY BY THE ALLOWED CLAIMS. Moreover, none of these claims are intended to invoke paragraph six of 35 USC section 112 unless the exact words “means for” are followed by a participle.
[00140] The claims as filed are intended to be as comprehensive as possible, and NO subject matter is intentionally relinquished, dedicated, or abandoned.
APPENDIX A
Intelligence: Who is buying
ID: 1
Author: Withiel Cole
Date: 30/05/2017
Objective: Utilise cubes to give a detailed understanding of the types of customers that purchase a given item.
This should consider:
Base EYC segmentations (and basket alternatives):
• FACTS / ACTS
• TruPrice / TP Basket
• ShopStyles
• Net Promoter (TBC)
PPCQ Input Segmentations:
• Price Sensitive
• Ad Responders Sensitive
• Choice Sensitive
• Quality Sensitive
• Service Sensitive (TBC)
• Promo Sensitive (TBC)
• CRM Sensitive (TBC)
Input Parameters (required specifications):
A defined product set:
• Product Hierarchy (Department / Category / Sub Category)
NOTE: "My" category / brand / department should be interpreted from understanding of the user
• A brand
• A specific product (or group of products)
A defined time frame (default to be selected based on rules)
A defined region (default to be selected based on rules)
Comparative product set (default to be selected based on rules)
Comparative time frame (default to be selected based on rules)
Processes / Calculations:
Intelligence Element: Who buys the most? (a group with absolutely the most importance)
ID: 1.1
Inputs: Input data for this intelligence will either be sourced from the Omni Cube or the Uber Cube.
Requires ability to create sales by customer group for defined time periods.
Approach to define and select correct data to be defined with architecture teams.
Calculations: Detailed calculation steps are outlined in spreadsheet
Intelligence Element: Who has the strongest affinity?
ID: 1.2
Inputs: Input data for this intelligence will either be sourced from the Omni Cube or the Uber Cube.
Requires ability to create sales by customer group for defined time periods.
Approach to define and select correct data to be defined with architecture teams.
Calculations: Detailed calculation steps are outlined in spreadsheet
Intelligence Element: Who has the strongest growth?
ID: 1.3
Inputs: Input data for this intelligence will either be sourced from the Omni Cube or the Uber Cube.
Requires ability to create sales by customer group for defined time periods.
Approach to define and select correct data to be defined with architecture teams.
Calculations: Detailed calculation steps are outlined in spreadsheet
Supporting: CMS3 Intelligence Calculations Brief Who WiV02.xlsx
Intelligence 1: Input Data: Data to be sourced from Omni / Uber Cubes
Source 1.1: Sales by Segment groups
INT_1.1
Intelligence 1.1: Who buys the most?
Step 1: Take data from source 1.1 as outlined below
Step 3: Select top n to display (3 initially)
Step 4: Add total sales figure from Source 1.2 to all rows
Visualization: Create "Remainder" Sales (Defined product set sales defined time frame total - Defined product set sales defined time frame)
PS: Customers most important for product set
Total Spend in Product Set
INT_1.2
Intelligence 1.2: Who has the strongest affinity?
Step 1: Take data from source 1.2 as outlined below
Step 2: Calculate the proportion of comparative sales that the defined product set accounts for
Step 3: Add the proportion of comparative to source 1.1 and compute an expected defined product set sales by multiplying comparative by the expected proportion
Step 4: Calculate the difference with expected comparative set sales
Step 6: Select the top 3 for display
Visualization: Calculate remainder columns
(or:)
FACTS 3 - Product set more important than for rest of customer base; Sales Relative to Comparative Product Set:
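The Intelligence 1.2 steps above can be sketched as follows, using invented segment figures; this is an illustrative version of the expected-sales affinity calculation (expected = comparative sales multiplied by the overall proportion), not production code:

```python
# Illustrative sketch of Intelligence 1.2 (strongest affinity), using
# invented data. Segments are ranked by actual minus expected sales.

segments = {  # segment: (defined_set_sales, comparative_set_sales)
    "FACTS_1": (120.0, 1000.0),
    "FACTS_2": (300.0, 1200.0),
    "FACTS_3": (80.0, 2000.0),
}

defined_total = sum(d for d, _ in segments.values())
comparative_total = sum(c for _, c in segments.values())
proportion = defined_total / comparative_total        # Step 2: overall proportion

diffs = []
for seg, (actual, comparative) in segments.items():
    expected = comparative * proportion               # Step 3: expected sales
    diffs.append((seg, actual - expected))            # Step 4: difference

top3 = sorted(diffs, key=lambda x: x[1], reverse=True)[:3]  # Step 6: top 3
```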
INT_1.3: Who has the strongest growth
Step 1: Take data from source 1.2 as outlined below
Step 2: Calculate the overall growth rate for the defined product set
Step 3: Add the growth rate to source 1.1 and compute an expected defined product set sales by multiplying comparative by the growth rate
Step 5: Sort by the difference descending
Equivalizing Scores
Step 1: Take the key output metrics for each of the Intelligence Approaches 1.1, 1.2, 1.3
Step 2: Obtain the total sales for the defined product set for the defined time period
Step 3: Utilise a Retailer-specific "standardisation" table
Further details to follow, parameters only calculated for Intelligence 1 at present
(metric_1_1 - 0.1909524303233663024) / (2.25 * 0.0945167156621699) as score_1_1,
(metric_1_2 - 0.0008127695323299980) / 0.0403303692130398 as score_1_2,
(metric_1_3 + 0.0010840616960575163) / (0.0459988637381487 / 1.5) as score_1_3
Step 4: Make all outputs in step 1 a proportion of the total from step 2
(Worked example for Steps 1 and 2, recovered values only: 0, 0, 0.019851117, 0.461901221, -0.46, 0.114006515; the original spreadsheet layout is not reproduced)
Step 5: Create a relevance score
To determine the extent to which each output is of interest, a score is calculated as follows: score = (proportion of total - parm1)/parm2. This is illustrated below.
In the above example, intelligence 1.2, shopping segment 3 from FACTS, is the most insightful (unusual); this would be the first information to display on the intelligence tab.
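The equivalization above (score = (proportion of total - parm1)/parm2) can be sketched as follows; the parm1/parm2 values are taken from the retailer-specific standardisation table in Appendix B, while the metric and total-sales inputs are illustrative:

```python
# Sketch of the relevance-score equivalization. parm1/parm2 per intelligence
# component come from the standardisation table (Appendix B, retailer "ABS").

PARMS = {  # intelligence component: (parm1, parm2)
    "1.1": (0.19095243, 0.21266261),
    "1.2": (0.00081277, 0.040330369),
    "1.3": (0.001084062, 0.030665909),
}

def relevance_score(component, metric, total_sales):
    parm1, parm2 = PARMS[component]
    proportion = metric / total_sales      # Step 4: output as proportion of total
    return (proportion - parm1) / parm2    # Step 5: standardized relevance score

# The most "insightful" (unusual) output is the one with the highest score.
```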
APPENDIX B
Intelligence 2.1 to 2.3: Input Data Retailer Group: Data to be sourced from Omni or Uber Cubes
Source 2.1: Sales by Retailer Groups
Source 2.2: Total Sales
Intelligence 2.1: Which group sells the most?
Step 1: Take data from source 2.1 as outlined below
Step 2: Sort by the defined product set sales descending and show the top n
Visualization: Suggest displaying the actual value, but using the proportion to colour / represent where there is overrepresentation
Follow-ups: This section lists potential follow-up questions and approaches to pursue from the findings of this intelligence
1.1 Who is buying my products in the topselling Retailer Group?
1.1 Who is buying my products in the bottomselling Retailer Group?
2.2 What are the topselling stores in the topselling Retailer Group?
Intelligence 2.2: Which Retailer Group has the strongest affinity?
Step 1: Take data from source 2.1 as outlined below
Step 2: Take the total sales for the defined product set and comparative product set in the defined time frame. Calculate the proportion of the comparative set that the defined set accounts for.
Step 3: Attach the proportion of comparison to the inputs from step 1. Multiply the proportion of comparison by the comparative product set sales.
Step 4: Calculate the difference to expected. Multiply the proportion of comparison by the comparative product set sales.
Step 5: Sort by the difference to expected descending and select the top n
Visualization: Suggest displaying the actual value, but using the proportion to colour / represent where there is overrepresentation
Follow-ups: This section lists potential follow-up questions and approaches to pursue from the findings of this intelligence
1.1 Who is buying my products in topselling Retailer Group?
1.1 Who is buying my products in bottomselling Retailer Group?
2.2 What are the topselling stores in topselling Retailer Group?
Intelligence 2.3: Which Retailer Group has the strongest growth?
Step 1: Take data from source 2.2 as outlined below
Step 2: Calculate the overall growth rate for the defined product set
Step 3: Add the growth rate to source 2.1 and compute an expected defined product set sales by multiplying comparative by the growth rate
Step 4: Calculate the difference with expected defined set sales
Step 5: Sort by the difference descending and select the top n
Visualization: Show the actual growth and the difference with expected to highlight where sales are growing more or less
Follow-ups: This section lists potential follow-up questions and approaches to pursue from the findings of this intelligence
1.3 Who is driving the growth in topgrowth Retailer Group?
1.3 Who is driving the decline in bottomgrowth Retailer Group?
2.4 What are the top performing stores in topgrowth Retailer Group?
2.4 What are the bottom performing stores in bottomgrowth Retailer Group?
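The Intelligence 2.3 growth comparison above can be sketched as follows, using invented group figures; the reading of "comparative" as prior-period sales is an assumption made for this illustration:

```python
# Illustrative sketch of Intelligence 2.3 (strongest growth), with invented
# data. Expected current sales = prior-period sales * overall growth rate;
# groups are ranked by actual minus expected (Step 5).

groups = {  # group: (current_sales, prior_period_sales)
    "North": (550.0, 500.0),
    "South": (480.0, 500.0),
    "West":  (620.0, 500.0),
}

# Step 2: overall growth rate for the defined product set
overall_growth = sum(c for c, _ in groups.values()) / sum(p for _, p in groups.values())

# Steps 3-5: expected sales per group, difference, sort descending
ranked = sorted(
    ((g, cur - prior * overall_growth) for g, (cur, prior) in groups.items()),
    key=lambda x: x[1],
    reverse=True,
)
```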
Intelligence 2 - Retailer Group: Equivalent "Relevance" Scoring
Step 1: Take the key output metrics for each of the Intelligence Approaches 2.1, 2.2, 2.3
Step 2: Obtain the total sales for the defined product set for the defined time period
Step 3: Utilise a Retailer-specific "standardisation" table as specified below
Intelligence Component   Retailer   Parm1         Parm2
1.1                      ABS        0.19095243    0.21266261
1.2                      ABS        0.00081277    0.040330369
1.3                      ABS        0.001084062   0.030665909
2.1                      ABS        0.115821526   0.319146577
2.2                      ABS        0.003166469   0.069450835
2.3                      ABS        0.000725241   0.053688854
Further details to follow, parameters only calculated for Intelligence 1 at present
Step 4: Make all outputs in step 1 a proportion of the total from step 2
2.1
2.2
2.3
Step 5: Create a relevance score
To determine the extent to which each output is of interest, a score is calculated as follows: score = (proportion of total - parm1)/parm2. This is illustrated below.
In the above example, intelligence 2.1, Retailer Group North, is the most insightful (unusual); this would be the first information to display on the intelligence tab.
Intelligence 2.4 to 2.6: Input Data Store
Data to be sourced from Omni or Uber Cubes
Source 2.3: Sales by Store
Source 2.4: Total Sales by Retailer Groups
Intelligence 2.4: What store sells the most?
Step 1: Take data from source 2.3 as outlined below
Step 2: Sort by the defined product set sales descending and show the top n
Visualisation: Suggest displaying the actual value, but using the proportion to colour / represent where there is overrepresentation
Follow-ups: This section lists potential follow-up questions and approaches to pursue from the findings of this intelligence
1.1 Who is buying my products in topselling store?
1.1 Who is buying my products in bottomselling store?
2.2 What are the topselling stores in topselling store?
Intelligence 2.5: What store has the strongest affinity?
Step 1: Take data from source 2.3 as outlined below
Step 2: Take the total sales by Retailer Group (source 2.4) for the defined product set and comparative product set in the defined time frame. Calculate the proportion of the comparative set that the defined set accounts for.
Step 3: Attach the proportion of comparison to the inputs from step 1. Multiply the proportion of comparison by the comparative product set sales.
Step 4: Calculate the difference to expected. Multiply the proportion of comparison by the comparative product set sales.
Step 5: Sort by the difference to expected descending and select the top n
Visualisation: Suggest displaying the actual value, but using the proportion to colour / represent where there is overrepresentation
Follow-ups: This section lists potential follow-up questions and approaches to pursue from the findings of this intelligence
1.1 Who is buying my products in topselling store?
1.1 Who is buying my products in bottomselling store?
2.2 What are the topselling stores in topselling store?
Intelligence 2.6 (mod)
Which store has the strongest growth?
Step 1
Take data from source 2.4 as outlined below
Step 2
Calculate the overall growth rates for the defined product set
Step 3
Add the growth rate to source 2.3 and compute an expected defined product set sales by multiplying comparative by the growth rate
Step 4
Calculate the difference with the expected defined set sales
Step 5
Sort by the difference descending and select the top n
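Steps 2-5 can be sketched as follows; the growth rate is expressed as a multiplicative factor, and all names and figures are illustrative, not from the source data:

```python
def rank_by_growth_difference(stores, overall_growth_rate):
    """Rank stores by actual sales minus growth-expected sales.

    stores maps store name -> (defined-set sales in the current period,
    defined-set sales in the comparative period). Expected sales are the
    comparative-period sales scaled by the overall growth rate of the
    defined product set (a multiplicative factor, e.g. 1.5 for +50%).
    """
    diffs = {
        store: current - prior * overall_growth_rate
        for store, (current, prior) in stores.items()
    }
    # Sort by the difference, descending (Step 5)
    return sorted(diffs.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative: the defined set grew 1.5x overall
ranking = rank_by_growth_difference(
    {"Store A": (180, 100), "Store B": (120, 100)}, 1.5)
```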
Visualisation
Show the actual growth and the difference with expected to highlight where sales are growing more or less than expected
Follow ups
This section lists potential follow up questions and approaches to take these from the findings from this intelligence
1.3 Who is driving the growth in the top-growth store?
1.3 Who is driving the decline in the bottom-growth store?
2.4 What are the top performing stores in the top-growth store?
2.4 What are the bottom performing stores in the bottom-growth store?
Intelligence 2 - Stores: Equivalent "Relevance" Scoring
Step 1: Take the key output metrics for each of the Intelligence Approaches
Step 2: Obtain the Total Sales by Retailer Group for the defined product set for the defined time period
Step 3: Utilise a Retailer specific "standardisation" table
Utilise a standardisation table as specified below
Further details to follow, parameters only calculated for Intelligence 1 at present
Step 4: Make all outputs in step 1 a proportion of the total from step 2
2.4
2.6
Step 5: Create a relevance score
To determine the extent to which each output is of interest, a score is calculated as follows: score = (proportion of total - parm1) / parm2
This is illustrated below
Store 105 is most insightful.
Intelligence 2 - Stores: Equivalent "Relevance" Scoring - corrected
Step 1: Take the key output metrics for each of the Intelligence Approaches
2.4
2.5
2.6
Step 2: Obtain the Total Sales by Retailer Group for the defined product set for the defined time period
Step 3: Utilise a Retailer specific "standardisation" table
Utilise a standardisation table as specified below
Further details to follow, parameters only calculated for Intelligence 1 at present
Step 4: Make all outputs in step 1 a proportion of the total from step 2
2.4
2.5
2.6
Step 5: Create a relevance score
To determine the extent to which each output is of interest, a score is calculated as follows: score = (proportion of total - parm1) / parm2
This is illustrated below
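A hypothetical sketch of this scoring step, assuming parm1 and parm2 are the location and scale parameters taken from the retailer-specific standardisation table (example values are illustrative):

```python
def relevance_score(output_value, total, parm1, parm2):
    """score = (proportion of total - parm1) / parm2.

    output_value is a key output metric from an Intelligence Approach,
    total is the Total Sales by Retailer Group for the defined product
    set, and parm1/parm2 come from the standardisation table.
    """
    proportion = output_value / total
    return (proportion - parm1) / parm2

# Illustrative parameters only
score = relevance_score(50, 100, 0.25, 0.125)  # proportion 0.5 -> score 2.0
```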
Store 105 is most relevant per both module 2.5 and 2.6.
APPENDIX C
V02 - when correcting for differing numbers of stores
Intelligence 3: Input Data
Data to be sourced from Omni & Uber Cubes
For sales metrics, use the Omni cubes where possible. The number of stores always has to be pulled from the Uber cubes, though.
Source 3.1: Number of Stores & Sales by Product Sets
Source 3.2: Total Number of Stores & Total Sales
Intelligence 3.1
What is the top selling?
Step 1
Take data from source 3.1 as outlined below
Step 2
Step 3
Sort by Sales in Defined Time Period for Defined Retailer Group and Defined Segment descending and display top n
Intelligence 3.2
What sells more than expected in/to (region/segment)?
Step 1
Take data from source 3.1 as outlined below
Step 2
Step 3
Calculate Defined Total Sales by Store & Comparison Total Sales by Store
Step 4
Add the Defined Total Sales & Comparison Total Sales by Store to the data from step 2
Step 4
Calculate the Proportion of Comparative Sales by Store (= Comparative Sales by Store / Comparative Total Sales by Store)
Calculate Expected Defined Sales by Store (= Defined Total Sales by Store * Proportion of Comparative Sales by Store)
Calculate Defined Difference by Store (= Defined Sales by Store - Expected Defined Sales by Store)
Calculate Defined Difference (= Defined Difference by Store * Number of Stores in Defined Time Period)
Step 5
Sort by Defined Difference
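The store-count correction in steps 2-5 can be sketched as follows; the function name and figures are illustrative, not from the source data:

```python
def defined_difference(defined_sales, comparative_sales, n_stores,
                       defined_total_by_store, comparative_total_by_store):
    """Store-count-corrected 'sells more than expected' metric.

    Per-store sales are obtained by dividing by the number of stores;
    the expected per-store defined sales apply the comparative per-store
    sales proportion to the defined total per store; the final Defined
    Difference scales the per-store gap back up by the store count.
    """
    defined_by_store = defined_sales / n_stores
    comparative_by_store = comparative_sales / n_stores
    proportion = comparative_by_store / comparative_total_by_store
    expected_by_store = defined_total_by_store * proportion
    return (defined_by_store - expected_by_store) * n_stores
```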
Intelligence 3.3
What has grown the most?
Step 1
Take data from source 3.1 as outlined below
Step 2
Calculate Defined Sales by Store & Comparative Sales by Store
Step 3
Calculate Absolute Growth (= Sales in Defined Time Period - Sales in Comparative Time Period)
Calculate Relative Growth (= Absolute Growth / Sales in Comparative Time Period)
Calculate Relative Growth by Store (= (Defined Sales by Store - Comparative Sales by Store) / Comparative Sales by Store)
Step 5
Sort by Absolute Growth and display the 5 metrics outlined below
Intelligence 3: Relevance Scores
Step 1
Take the following metrics as well as the Total Sales for the Defined Time Period from source 3.2 & calculate relative metrics
3.1
Sales in Defined Time Period by Product Set
3.2
Defined Difference by Product Set
3.3
Absolute Growth by Product Set
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores (sorted by intelligence)
Sorted by Relevance Scores
V04 - when not correcting for differing numbers of stores
Intelligence 3.2
What sells more than expected in/to (region/segment)?
Step 1
Take data from source 3.1 as outlined below
Step 2
Calculate Defined Total Sales by Store & Comparison Total Sales by Store
Step 3
Add the Defined Total Sales & Comparison Total Sales by Store to the data from step 2
Step 4
Calculate the Proportion of Comparative Sales (= Comparative Sales / Comparative Total Sales)
Calculate Expected Defined Sales (= Defined Total Sales * Proportion of Comparative Sales)
Calculate Defined Difference (= Defined Sales - Expected Defined Sales)
Step 5
Sort by Defined Difference
Intelligence 3.3
What has grown the most?
Step 1
Take data from source 3.1 as outlined below
Step 2
Take data from source 2.2 and calculate Expected Growth Rate (= Total Sales Defined Time Period / Total Sales Comparative Time Period)
Step 3
Add Expected Growth Rate to the data from step 1
Calculate Expected Sales Defined Time Period (= Expected Growth Rate * Sales in Comparative Time Period)
Calculate Difference (= Expected Sales Defined Time Period - Sales in Defined Time Period)
Step 5
Sort by Difference & display top n
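Steps 2-5 can be sketched as follows, with illustrative totals (not from the source data):

```python
def growth_difference(total_sales_defined, total_sales_comparative,
                      sales_defined, sales_comparative):
    """Difference between growth-expected and actual sales for a product set.

    Expected Growth Rate = Total Sales Defined Time Period / Total Sales
    Comparative Time Period. Expected Sales apply that rate to the product
    set's sales in the comparative period; the Difference is Expected Sales
    minus actual sales in the defined period (positive means the set grew
    less than the overall trend would suggest).
    """
    expected_growth_rate = total_sales_defined / total_sales_comparative
    expected_sales = expected_growth_rate * sales_comparative
    return expected_sales - sales_defined
```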
Intelligence 3: Relevance Scores
Step 1
Take the following metrics as well as the Total Sales for the Defined Time Period from source 3.2 & calculate relative metrics
3.1
Sales in Defined Time Period by Product Set
3.2
Defined Difference by Product Set
3.3
Absolute Growth by Product Set
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores (sorted by intelligence)
Sorted by Relevance Score
Intelligence 3: Relevance Scores (corrected)
Step 1
Take the following metrics as well as the Total Sales for the Defined Time Period from source 3.2 & calculate relative metrics
3.1
Sales in Defined Time Period by Product Set
3.2
Defined Difference by Product Set
3.3
Absolute Growth by Product Set
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
Step 3
Calculate Relevance Scores (sorted by intelligence)
Sorted by Relevance Score
APPENDIX D
Intelligence 4: Input Data
Data to be sourced from Day and Time cubes (TXN_UPC_TIMEOFDAY_DAILY_AGGR & TXN_UPC_TIMEOFDAY_AGGR) on SQL server
Source 4.1: Total Sales by calendar day of month from TXN_UPC_TIMEOFDAY_DAILY_AGGR
Aggregate tot_net_amt from TXN_UPC_TIMEOFDAY_DAILY_AGGR to return detailed information for each calendar day for the Defined and Comparative Product Sets
Add a column for day of month (highlighted in yellow) by extracting the day from txn_dt
Please note that Standard Product Exclusions & Standard Store Exclusions need to be applied when aggregating
Source 4.2: Total Sales by Retailer Day of Week from TXN_UPC_TIMEOFDAY_AGGR
Aggregate tot_net_amt from TXN_UPC_TIMEOFDAY_AGGR to return detailed information for each Retailer Day of Week (ret_dow_nbr) for the Defined and Comparative Product Sets
Please note that Standard Product Exclusions & Standard Store Exclusions need to be applied when aggregating
Source 4.3: Total Sales by hour from TXN_UPC_TIMEOFDAY_AGGR
Aggregate TXN_UPC_TIMEOFDAY_AGGR to return detailed information for each hour for the Defined and Comparative Product Sets
Please note that Standard Product Exclusions & Standard Store Exclusions need to be applied when aggregating
Intelligence 4.1
What calendar day of the month do I sell most?
Step 1
Take data from source 4.1 for the Defined Product Set as outlined below
Step 2
Intelligence 4.2
What calendar day of the month did under-/overperform the most?
Step 1
Step 2
Step 3
For the Comparative Product Set, calculate the Expected Proportion of Total Sales by calendar day of month
Step 4
Attach the Expected Proportion of Total Sales by calendar day of month from step 3 and the Defined Product Set Total Sales from step 2 to Total Sales by calendar day of month for the Defined Product Set from step 1
Step 5
Calculate Expected Total Sales for the Defined Product Set and add the Difference between Total Sales and the Expected Total Sales for the Defined Product Set
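Steps 2-5 of this module can be sketched as follows; the day keys and sales figures are illustrative, not from the source data:

```python
def day_of_month_over_underperformance(defined_by_day, comparative_by_day):
    """Difference between actual and expected defined-set sales per day.

    The Expected Proportion of Total Sales per calendar day comes from
    the comparative product set; the expected defined-set sales for a day
    apply that proportion to the defined set's total; the difference is
    actual minus expected (positive = overperformance on that day).
    """
    defined_total = sum(defined_by_day.values())
    comparative_total = sum(comparative_by_day.values())
    diffs = {}
    for day, comp_sales in comparative_by_day.items():
        expected_proportion = comp_sales / comparative_total
        expected = expected_proportion * defined_total
        diffs[day] = defined_by_day[day] - expected
    return diffs
```

The same pattern applies to Intelligence 4.4 (Retailer Day of Week) and 4.6 (hour of day), with a different key in place of the calendar day.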
Intelligence 4.3
What Retailer Day of Week did I sell the most?
Step 1
Step 2
Intelligence 4.4
What Retailer Day of Week did under-/overperform the most?
Step 1
Take data from source 4.2 as outlined below
Step 2
Step 3
For the Comparative Product Set, calculate the Expected Proportion of Total Sales by Retailer Day of Week
Step 4
Attach the Expected Proportion of Total Sales by Retailer Day of Week from step 3 and the Defined Product Set Total Sales from step 2 to Total Sales by Retailer Day of Week for the Defined Product Set from step 1
Step 5
Calculate Expected Total Sales for the Defined Product Set and add the Difference between Total Sales and the Expected Total Sales for the Defined Product Set
Intelligence 4.5
What time of the day do I sell most?
Step 1
Take data from source 4.3 for the Defined Product Set as outlined below
Step 2
Intelligence 4.6
What time of the day did under-/overperform the most?
Step 1
Take data from source 4.3 as outlined below
Step 2
Calculate Total Sales for Defined and Comparative Product Set
Step 3
For the Comparative Product Set, calculate the Expected Proportion of Total Sales by Hour
Step 4
Attach the Expected Proportion of Total Sales by Hour from step 3 and the Defined Product Set Total Sales from step 2 to Total Sales by Hour for the Defined Product Set from step 1
Step 5
Calculate Expected Total Sales for the Defined Product Set and add the Difference between Total Sales and the Expected Total Sales for the Defined Product Set
Intelligence 4.6 -Corrected
What time of the day did under-/overperform the most?
Step 1
Take data from source 4.3 as outlined below
Step 2
Calculate Total Sales for Defined and Comparative Product Set
Step 3
For the Comparative Product Set, calculate the Expected Proportion of Total Sales by Hour
Step 4
Attach the Expected Proportion of Total Sales by Hour from step 3 and the Defined Product Set Total Sales from step 2 to Total Sales by Hour for the Defined Product Set from step 1
Step 5
Calculate Expected Total Sales for the Defined Product Set and add the Difference between Total Sales and the Expected Total Sales for the Defined Product Set
Intelligence 4: Relevance Scores -corrected
Step 1
Take the Product Set Total Sales as well as Differences from each of the Intelligence elements & calculate Relative Metrics
4.1
4.2
4.5
4.6
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores for all Relative Differences
Step 4: Sort to highlight the most extreme cases
Sorting within modules:
Sorting regardless of module:
APPENDIX E
Intelligence 5: Input Data
Source 5.1
TXN_UPC_BASKET_INTERACTIONS
Source 5.2
TXN_UPC_HHD_INTERACTIONS
Source 5.3
Source 5.4
TXN_UPCSET_HHD_INTERACTIONS
Intelligence 5.1
What other products are in the same basket as my product?
Step 1
Take the source data from 5.1 (if Product Analysis Level = UPC) or 5.3 (if Product Analysis Level = Category) as outlined below
Step 2
Calculate % of purchase together (= contains_both_item_qty / (contains_both_item_qty + target_only_item_qty + comparison_only_item_qty))
Calculate Correlation (= ((contains_both_item_qty * contains_none_item_qty) - (target_only_item_qty * comparison_only_item_qty)) / squareroot ((contains_both_item_qty + target_only_item_qty) * (comparison_only_item_qty + contains_none_item_qty) * (contains_both_item_qty + comparison_only_item_qty) * (target_only_item_qty + contains_none_item_qty)))
Step 3
Sort by the % of purchase together descending and select top N to display
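A sketch of the Step 2 metrics. The correlation here follows the standard phi coefficient for a 2x2 contingency table, which is what the formula above appears to intend; the counts are illustrative:

```python
import math

def basket_affinity(both, target_only, comparison_only, none):
    """% of purchase together and correlation for a 2x2 basket table.

    both / target_only / comparison_only / none are the four counts
    (contains_both_item_qty, target_only_item_qty, comparison_only_item_qty,
    contains_none_item_qty). The correlation is the phi coefficient.
    """
    pct_together = both / (both + target_only + comparison_only)
    phi = (both * none - target_only * comparison_only) / math.sqrt(
        (both + target_only) * (comparison_only + none)
        * (both + comparison_only) * (target_only + none)
    )
    return pct_together, phi
```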
Intelligence 5.2
What other products do my customers buy?
Step 1
Take the source data from 5.2 (if Product Analysis Level = UPC) or 5.4 (if Product Analysis Level = Category) as outlined below
Step 2
Calculate % of purchase together (= contains_both_item_qty / (contains_both_item_qty + target_only_item_qty + comparison_only_item_qty))
Calculate Correlation (= ((contains_both_item_qty * contains_none_item_qty) - (target_only_item_qty * comparison_only_item_qty)) / squareroot ((contains_both_item_qty + target_only_item_qty) * (comparison_only_item_qty + contains_none_item_qty) * (contains_both_item_qty + comparison_only_item_qty) * (target_only_item_qty + contains_none_item_qty)))
Step 3
Sort by the % of purchase together descending and select top N to display
Intelligence 5.3
What products are complementary to mine?
Step 1
Take the source data from 5.2 (if Product Analysis Level = UPC) or 5.4 (if Product Analysis Level = Category) as outlined below
Step 2
Calculate Totals across all target_upc_ids (target_upcset_ids) and comparison_upc_ids (comparison_upcset_ids)
Calculate % Purchased Together Total (= contains_both_item_qty / (contains_both_item_qty + target_only_item_qty + comparison_only_item_qty))
Step 3
Calculate Expected contains_both_item_qty (= (contains_both_item_qty + target_only_item_qty + comparison_only_item_qty) * % Purchased Together Total)
Calculate Difference = contains_both_item_qty - Expected contains_both_item_qty
Calculate % Purchased Together (= same_hhds_different_basket / (same_hhds_different_basket + target_only_item_qty + comparison_only_item_qty))
Calculate Complementary Index (= % Purchased Together / % Purchased Together Total)
Step 4
Sort by Complementary Index descending and select top n
Round expected contains_both_item_qty & difference to 0 decimals
Display the following metrics:
Intelligence 5.4 -corrected
What products do customers buy instead of my products?
Step 1
Take data from source 5.2 (if Product Analysis Level = UPC) or 5.4 (if Product Analysis Level = Category)
Append column contains_both_item_qty as contains_both_item_qty_basket from source 5.1 (if Product Analysis Level = UPC) or 5.3 (if Product Analysis Level = Category) as highlighted in yellow
Step 2
Calculate same_hhds_different_basket (= contains_both_item_qty - contains_both_item_qty_basket), answering the question: how many items were sold to the same household but not within the same basket?
Apportion the amount of items that were bought by the same customer in the same basket to target_only_item_qty and comparison_only_item_qty (& round those to 0 decimals):
target_only_item_qty_new = target_only_item_qty + (target_only_item_qty / (target_only_item_qty + comparison_only_item_qty) * contains_both_item_qty_basket)
comparison_only_item_qty_new = comparison_only_item_qty + (comparison_only_item_qty / (target_only_item_qty + comparison_only_item_qty) * contains_both_item_qty_basket)
Step 3
Calculate Totals across all target_upc_ids (target_upcset_ids) and comparison_upc_ids (comparison_upcset_ids)
Calculate % Purchased Together (= same_hhds_different_basket / (same_hhds_different_basket + target_only_item_qty + comparison_only_item_qty))
Step 4
Calculate Expected contains_both_item_qty (= (contains_both_item_qty + target_only_item_qty + comparison_only_item_qty) * % Purchased Together Total)
Calculate Difference (= contains_both_item_qty - Expected contains_both_item_qty)
Calculate % Purchased Together (= same_hhds_different_basket / (same_hhds_different_basket + target_only_item_qty + comparison_only_item_qty))
Calculate Substitutional Index (= % Purchased Together / % Purchased Together Total)
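The Step 2 derivation of same_hhds_different_basket and the apportionment can be sketched as follows; the counts are illustrative, not from the source data:

```python
def substitution_inputs(both_hhd, both_basket, target_only, comparison_only):
    """Prepare the household-level counts for the substitution analysis.

    same_hhds_different_basket counts items bought by the same household
    but not in the same basket; the same-basket quantity is then
    apportioned to the target-only and comparison-only counts in
    proportion to their sizes and rounded to 0 decimals.
    """
    same_hhds_different_basket = both_hhd - both_basket
    share = target_only / (target_only + comparison_only)
    target_only_new = round(target_only + share * both_basket)
    comparison_only_new = round(comparison_only + (1 - share) * both_basket)
    return same_hhds_different_basket, target_only_new, comparison_only_new
```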
Intelligence 5: Relevance Scores
Step 1
Take the following metrics from sub-modules 5.1 to 5.4
5.1
5.2
5.3
5.4
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores
Sorted by Relevance Score
Intelligence 5: Relevance Scores (as corrected)
Step 1
Take the following metrics from sub-modules 5.1 to 5.4
5.1
5.2
5.3
5.4
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores
Sorted by Relevance
APPENDIX F
Intelligence 6
Input Data
Data to be sourced from Uber Cube
Source 6.1
Trial Sales and Total Sales by week for Defined Product Set, Defined Time Frame & Defined Retailer Groups/Stores (if supplied)
Source 6.2
Trial Sales and Total Sales by week for Comparative Product Set (in this example Total Store), Defined Time Frame & Defined Retailer Groups/Stores (if supplied)
Intelligence 6.1
Is my product generating trial or repeat purchase?
Step 1
Take data from source 6.1 as outlined below
Step 2
Aggregate data to Defined Product Set level
Step 3
Calculate Trial Sales Rate for Defined Product Set (= trial_item_net_amt / total_net_amt)
Calculate Repeat Sales Rate for Defined Product Set (= 1 - Trial Sales Rate for Defined Product Set)
Step 4
Take data from source 6.2 as outlined below
Step 5
Step 6
Calculate Trial Sales Rate for Comparative Product Set (= trial_item_net_amt / total_net_amt)
Calculate Repeat Sales Rate for Comparative Product Set (= 1 - Trial Sales Rate for Comparative Product Set)
Step 7
Present Defined Product Set Trial Sales Rate, Defined Product Set Repeat Sales Rate, Comparative Product Set Trial Sales Rate & Comparative Product Set Repeat Sales Rate as Percentage rounded to 2 decimals
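The rate calculations in the steps above can be sketched as follows; the figures are illustrative, not from the source data:

```python
def trial_repeat_rates(trial_item_net_amt, total_net_amt):
    """Trial and repeat sales rates as percentages rounded to 2 decimals.

    Trial Sales Rate = trial_item_net_amt / total_net_amt; the Repeat
    Sales Rate is its complement (1 - Trial Sales Rate).
    """
    trial_rate = trial_item_net_amt / total_net_amt
    repeat_rate = 1 - trial_rate
    return round(trial_rate * 100, 2), round(repeat_rate * 100, 2)
```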
Intelligence 6.2
How does the Trial & Repeat Sales Rate differ by (Defined Product Analysis Level, Defined Time Analysis Level, Defined Store Analysis Level)?
Step 1
Take data from source 6.1 as outlined below
Step 2
Attach Defined Aggregation Level (either Defined Product Analysis Level, Defined Time Analysis Level, or Defined Store Analysis Level depending on the user selections), in this example Region
Step 3
Aggregate data to Aggregation Level (here Region)
Step 4
Calculate Trial Sales Rate for Defined Product Set (= trial_item_net_amt / total_net_amt)
Calculate Repeat Sales Rate for Defined Product Set (= 1 - Trial Sales Rate for Defined Product Set)
Step 5
Present Trial Sales Rates & Repeat Sales Rate as Percentage rounded to 2 decimals
Intelligence 6
Relevance Scores
Step 1
Take the Trial Sales Rates from INT 6.1 and 6.2
Step 2
Get distribution parameters for Intelligence sub-modules from the Retailer-specific "Standardisation" table
NOTE!!!!! THOSE WILL NOT BE THE FINAL PARAMETERS FOR ABS
Step 3
Calculate Relevance Scores
APPENDIX G
Intelligence 7.1.1
What are my Carded Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Carded Sales Growth Rate = (Carded Sales Defined Time Period - Carded Sales Comparison Time Period) / Carded Sales Comparison Time Period and round to one decimal
Intelligence 7.1.2
How did my Carded Sales grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Carded Sales Growth Rate = (Carded Sales Defined Time Period - Carded Sales Comparison Time Period) / Carded Sales Comparison Time Period and round to one decimal
Intelligence 7.1.3
How did the Growth in Carded Sales contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Carded Sales Contribution = Carded Sales Defined Time Period - Carded Sales Comparison Time Period and round to integer
Intelligence 7.2.1
What are my Non-Carded Sales?
Step 1
Pull data
Carded Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and
Total Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and
Step 2
Intelligence 7.2.2
How did my Non-Carded Sales grow?
Step 1
Pull data
Carded Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C', 'Y') and week_num = 201736
Total Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Non-Carded Sales Growth Rate = ((Total Sales Defined Time Period - Carded Sales Defined Time Period) - (Total Sales Comparison Time Period - Carded Sales Comparison Time Period)) / (Total Sales Comparison Time Period - Carded Sales Comparison Time Period) and round to one decimal

Intelligence 7.2.3
How did the Growth in Non-Carded Sales contribute to the Growth in Total Sales?
Step 1
Pull data
Carded Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C', 'Y') and week_num = 201736
Total Sales
select lx_id, total_carded_net_amt from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and
Step 2
Calculate Non-Carded Sales Contribution = (Total Sales Defined Time Period - Carded Sales Defined Time Period) - (Total Sales Comparison Time Period - Carded Sales Comparison Time Period) and round to integer
Intelligence 7.3.1
How many Households buy my product?
Step 1
Pull data and round to integer
select lx_id, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Intelligence 7.3.2
How did my Households grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Household Growth Rate = (Households Defined Time Period - Households Comparison Time Period) / Households Comparison Time Period and round to one decimal
Intelligence 7.3.3
How did the Growth in Households contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Household Contribution = Carded Sales Comparison Time Period * Households Growth Rate * (1 + Spend per Household Growth Rate / 2) and round to integer
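The contribution formula in Step 2 can be sketched as follows; growth rates are expressed as fractions (e.g. 0.10 for 10%) and the figures are illustrative, not from the source data:

```python
def household_contribution(carded_sales_comparison, household_growth_rate,
                           spend_per_household_growth_rate):
    """Contribution of household growth to the growth in total sales.

    Carded Sales Comparison Time Period * Households Growth Rate *
    (1 + Spend per Household Growth Rate / 2), rounded to integer.
    The halved spend-growth term splits the interaction of the two
    growth effects between them.
    """
    return round(carded_sales_comparison * household_growth_rate
                 * (1 + spend_per_household_growth_rate / 2))
```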
Intelligence 7.4.1
What is the Spend per Household for my product?
Step 1
Pull data
select lx_id, total_carded_net_amt, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Step 2
Intelligence 7.4.2
How did the Spend per Household grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Spend per Household Growth Rate = ((Carded Sales Defined Time Period / Households Defined Time Period) - (Carded Sales Comparison Time Period / Households Comparison Time Period)) / (Carded Sales Comparison Time Period / Households Comparison Time Period) and round to one decimal
Intelligence 7.4.3
How did the Growth in Spend per Household contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Spend per Household Contribution = Carded Sales Comparison Time Period * Spend per Household Growth Rate * (1 + Household Growth Rate / 2)
Intelligence 7.5.1
What is the Household Penetration for my product?
Step 1
Pull data
Households for Defined Product Set
select lx_id, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Retailer Households
select lx_id, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Note: Retailer HH are ALWAYS all Households that shopped across the Total Store, no matter what the Defined Product Set, Defined Retailer Group or Defined Customer Segment are!
Step 2
Intelligence 7.5.2
How did the Household Penetration for my product grow?
Step 1
Pull data
Households for Defined Product Set
select lx_id, timelevel_ind, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Retailer Households
select lx_id, timelevel_ind, total_hhds_num from
[OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Note: Retailer HH are ALWAYS all Households that shopped across the Total Store, no matter what the Defined Product Set, Defined Retailer Group or Defined Customer Segment are!
Step 2
Calculate Household Penetration Growth Rate = ((Households Defined Time Period / Retailer Households Defined Time Period) - (Households Comparison Time Period / Retailer Households Comparison Time Period)) / (Households Comparison Time Period / Retailer Households Comparison Time Period) and round to one decimal
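Each growth-rate step in this appendix follows the same ratio-of-ratios pattern: (defined-period ratio minus comparison-period ratio) divided by the comparison-period ratio. A minimal Python sketch of that pattern (the function name, the percentage units, and the sample figures are illustrative assumptions, not from the source):

```python
def ratio_growth_rate(num_def, den_def, num_comp, den_comp):
    """Growth of a ratio metric between the Defined ('Y') and Comparison
    ('C') time periods, e.g. Household Penetration = Households / Retailer
    Households. Returned as a percentage rounded to one decimal (the
    percentage unit is an assumption about the appendix's convention)."""
    defined = num_def / den_def
    comparison = num_comp / den_comp
    return round((defined - comparison) / comparison * 100, 1)

# Hypothetical figures: penetration rises from 200/1000 to 220/1000 households
print(ratio_growth_rate(220, 1000, 200, 1000))  # -> 10.0
```

The same helper covers 7.5.2 (penetration), 7.7.2 (basket size), 7.8.2 (frequency), 7.9.2 (units per basket), and 7.10.2 (item price) by substituting the appropriate numerator and denominator columns.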
Intelligence 7.5.3
How did the Growth in Household Penetration contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Retailer Households
select lx_id, timelevel_ind, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Note: Retailer HH are ALWAYS all Households that shopped across the Total Store, no matter what the Defined Product Set, Defined Retailer Group or Defined Customer Segment are!!!
Step 2
Calculate Household Penetration Contribution = Carded Sales Comparison Time Period * Household Penetration Growth Rate * (1 + (Retailer Household Growth Rate / 2)) * (1 + (Spend per Household Growth Rate / 2)) and round to integer
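All of the contribution formulas in this appendix share one shape: comparison-period carded sales, times the driver's own growth rate, scaled by half of each co-driver's growth rate. A hedged Python sketch (the function name and the sample figures are illustrative assumptions, not from the source; growth rates are fractions, so 0.05 means 5%):

```python
def contribution(base_sales, own_growth, *co_driver_growths):
    """Contribution of one driver to total sales growth: comparison-period
    carded sales * the driver's own growth rate, adjusted by (1 + g/2)
    for each co-driver growth rate g, rounded to an integer."""
    result = base_sales * own_growth
    for g in co_driver_growths:
        result *= 1 + g / 2
    return round(result)

# Hypothetical: $1,000 base sales, 50% own growth, 100% co-driver growth
print(contribution(1000, 0.5, 1.0))  # -> 750
```

The number of co-driver terms varies by section: two for 7.5.3/7.6.3/7.7.3/7.8.3, three for 7.9.3 and 7.10.3.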
Intelligence 7.6.1
What are my Retailer Households?
Step 1
Pull data
select lx_id, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Intelligence 7.6.2
How did my Retailer Households grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Note: Retailer Households are ALWAYS all Households that shopped across the Total Store, no matter what the Defined Product Set, Defined Retailer Group or Defined Customer Segment are!!!
Step 2
Calculate Retailer Household Growth Rate = (Retailer Households Defined Time Period - Retailer Households Comparison Time Period) / Retailer Households Comparison Time Period and round to one decimal
Intelligence 7.6.3
How did the Growth in Retailer Households contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Retailer Households
select lx_id, timelevel_ind, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 0 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 99 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Note: Retailer Households are ALWAYS all Households that shopped across the Total Store, no matter what the Defined Product Set, Defined Retailer Group or Defined Customer Segment are!!!
Step 2
Calculate Retailer Household Contribution = Carded Sales Comparison Time Period * Retailer Household Growth Rate * (1 + (Household Penetration Growth Rate / 2)) * (1 + (Spend per Household Growth Rate / 2)) and round to integer
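The halved co-driver terms in these contribution formulas are not arbitrary: each (1 + g/2) factor splits a pairwise interaction term evenly between the two drivers involved, so the individual contributions (approximately, and exactly for two drivers) sum to the total sales growth. A small check with hypothetical rates (all figures are illustrative assumptions):

```python
# Two-driver reconciliation: for drivers with growth rates g1 and g2,
# contribution_1 + contribution_2 = s0 * (g1 + g2 + g1*g2), which equals
# the exact total growth s0 * ((1 + g1) * (1 + g2) - 1).
s0 = 1_000_000          # comparison-period carded sales (hypothetical)
g1, g2 = 0.10, 0.04     # hypothetical driver growth rates (10% and 4%)

contrib1 = s0 * g1 * (1 + g2 / 2)
contrib2 = s0 * g2 * (1 + g1 / 2)
total_growth = s0 * ((1 + g1) * (1 + g2) - 1)

print(round(contrib1 + contrib2), round(total_growth))  # both 144000
```

With three or more drivers the reconciliation is approximate: the halved factors recover all pairwise interactions but only part of the higher-order terms, which are negligible for typical single-digit growth rates.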
Intelligence 7.7.1
What is the Average Basket Size for my product?
Step 1
Pull data
select lx_id, total_carded_net_amt, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Step 2
Calculate Average Basket Size = Carded Sales / Carded Baskets
Intelligence 7.7.2
How did the Average Basket Size grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Average Basket Size Growth Rate = ((Carded Sales Defined Time Period / Carded Baskets Defined Time Period) - (Carded Sales Comparison Time Period / Carded Baskets Comparison Time Period)) / (Carded Sales Comparison Time Period / Carded Baskets Comparison Time Period) and round to one decimal
Intelligence 7.7.3
How did the Growth in Average Basket Size contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y')
Step 2
Calculate Average Basket Size Contribution = Carded Sales Comparison Time Period * Average Basket Size Growth Rate * (1 + (Frequency Growth Rate / 2)) * (1 + (Households Growth Rate / 2)) and round to integer
Intelligence 7.8.1
What is the Frequency for my product?
Step 1
Pull data
select lx_id, total_carded_txns_num, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Step 2
Calculate Frequency = Carded Baskets / Households and round to two decimals
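As a minimal sketch of this step (the function name and sample figures are hypothetical, not from the source):

```python
def frequency(carded_baskets, households):
    """Average carded baskets (shopping trips) per household over the
    period, rounded to two decimals as the appendix specifies."""
    return round(carded_baskets / households, 2)

# Hypothetical: 125,000 carded baskets across 40,000 households
print(frequency(125_000, 40_000))  # -> 3.12
```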
Intelligence 7.8.2
How did the Frequency grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_txns_num, total_hhds_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Frequency Growth Rate = ((Carded Baskets Defined Time Period / Households Defined Time Period) - (Carded Baskets Comparison Time Period / Households Comparison Time Period)) / (Carded Baskets Comparison Time Period / Households Comparison Time Period) and round to one decimal
Intelligence 7.8.3
How did the Growth in Frequency contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_hhds_num, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y')
Step 2
Calculate Frequency Contribution = Carded Sales Comparison Time Period * Frequency Growth Rate * (1 + (Average Basket Size Growth Rate / 2)) * (1 + (Households Growth Rate / 2)) and round to integer
Intelligence 7.9.1
What is the Units per Basket for my product?
Step 1
Pull data
select lx_id, total_carded_item_qty, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Step 2
Calculate Units per Basket = Carded Units / Carded Baskets
Intelligence 7.9.2
How did the Units per Basket grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_item_qty, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Units per Basket Growth Rate = ((Carded Units Defined Time Period / Carded Baskets Defined Time Period) - (Carded Units Comparison Time Period / Carded Baskets Comparison Time Period)) / (Carded Units Comparison Time Period / Carded Baskets Comparison Time Period) and round to one decimal
Intelligence 7.9.3
How did the Growth in Units per Basket contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_carded_item_qty, total_hhds_num, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Step 2
Calculate Units per Basket Contribution = Carded Sales Comparison Time Period * Units per Basket Growth Rate * (1 + (Average Item Price Growth Rate / 2)) * (1 + (Household Growth Rate / 2)) * (1 + (Frequency Growth Rate / 2)) and round to integer
Intelligence 7.10.1
What is the Average Item Price for my product?
Step 1
Pull data
select lx_id, total_carded_net_amt, total_carded_item_qty from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind = 'Y' and week_num = 201736
Step 2
Calculate Average Item Price = Carded Sales / Carded Units
Intelligence 7.10.2
How did the Average Item Price grow?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_carded_item_qty from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('Y', 'C') and week_num = 201736
Step 2
Calculate Average Item Price Growth Rate = ((Carded Sales Defined Time Period / Carded Units Defined Time Period) - (Carded Sales Comparison Time Period / Carded Units Comparison Time Period)) / (Carded Sales Comparison Time Period / Carded Units Comparison Time Period) and round to one decimal
Intelligence 7.10.3
How did the Growth in Average Item Price contribute to the Growth in Total Sales?
Step 1
Pull data
select lx_id, timelevel_ind, total_carded_net_amt, total_carded_item_qty, total_hhds_num, total_carded_txns_num from [OMNDOL].[TST_RETAILERGROUP_SEGAGGR_TIMEPERIOD] where lx_id = 10129 and brand_id = 0 and manufacturer_id = 0 and seg_group_id = 0 and retailer_group_id = 0 and timelevel_ind in ('C','Y') and week_num = 201736
Step 2
Calculate Average Item Price Contribution = Carded Sales Comparison Time Period * Average Item Price Growth Rate * (1 + (Units per Basket Growth Rate / 2)) * (1 + (Household Growth Rate / 2)) * (1 + (Frequency Growth Rate / 2)) and round to integer
APPENDIX H
TXN_ITM - Segmented by Store ID, UPC ID
Claims
1. A method for processing queries into a large database of transactions, comprising the actions of:
receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module;
applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data;
allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
2. The method of Claim 1, wherein the query can be a natural-language query; and further comprising the initial step of parsing the natural-language query.
3. The method of Claim 1, further comprising the subsequent step of
displaying an immersive environment to the user to represent the output of at least one further-analysis module.
4. A method for processing queries into a large database of transactions, comprising the actions of:
receiving and parsing a natural language query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module;
applying the lead analysis module to a large set of transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data;
allowing the user to select at least one of the further-analysis modules, and
displaying the results from the selected further-analysis module to the user with an immersive environment, in which items relevant to the query are made conspicuous.
5. The method of Claim 4, wherein the immersive environment corresponds to a view of products displayed for sale in a physical retail location.
6. A method for processing queries into a large database of transactions, comprising the actions of:
receiving a query from a user, and accordingly selecting one of a predetermined set of analysis modules to be a lead analysis module;
applying the lead analysis module to transaction data to thereby provide an initial output, and also providing a ranking of multiple further-analysis modules, while also running the multiple further-analysis modules on the transaction data;
wherein at least one said analysis module operates not only on transactional data, but also on customer data which is not derived from transactional data; and
allowing the user to select at least one of the further-analysis modules, and providing a corresponding output to the user.
7. The method of Claim 6, wherein the query can be a natural-language query; and further comprising the initial step of parsing the natural-language query.
8. The method of Claim 6, further comprising the subsequent step of
displaying an immersive environment to the user to represent the output of at least one further-analysis module.
9. A method for processing queries into a large database of transactions, comprising the actions of:
receiving and parsing a natural language query from a user, and accessing a database of transactions to thereby produce an answer to the query; and
displaying an immersive environment to the user, in which objects relevant to the query are made conspicuous.
10. The method of Claim 9, wherein the immersive environment
corresponds to a view of products displayed for sale in a physical retail location.
11. A method for processing queries into a large database of transactions, comprising the actions of:
when a user inputs a natural-language query into the front-end interface, natural language processing determines the intent and scope of the request, and passes the intent and scope to the analysis module;
in the analysis module, the intent and scope are used to select a primary analysis engine;
from the intent and scope, the primary analysis engine determines what data cube(s) are relevant to the query at hand, and retrieves the appropriate fundamental data block(s);
the fundamental data block(s) are passed to the specific/secondary analysis engine(s);
in the specific/secondary analysis engine(s): the fundamental data block(s) are analyzed according to one or more sub-module metrics; relevance scores are calculated for the subsequent result block(s); and, based on the relevance scores, the specific/secondary analysis engine determines which result block(s) are most important to the query at hand;
one or more intelligence block(s) are populated based on the most important result block(s), and the intelligence block(s) are passed back to the primary analysis engine;
the primary analysis module then returns the fundamental and intelligence blocks;
the fundamental and intelligence blocks are then passed back out of the analysis module, whereupon the fundamental and intelligence blocks are translated into natural language results, visualizations, and/or other means of usefully conveying information to the user, as appropriate; and
the translated results are displayed to the user.
12. The method of Claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, wherein at least one said analysis module operates not only on transactional data, but also on exogenous data which is not derived from transactional data.
13. The method of Claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, wherein at least one said analysis module operates not only on transactional data, but also on customer data which is not derived from transactional data.
14. The method of Claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, wherein at least one said analysis module operates not only on transactional data, but also on exogenous data at multiple levels of aggregation.
15. A system which implements the actions of Claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 in near-real time, while drawing on pre-aggregated views of current transactional data to support the analysis modules.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762598644P | 2017-12-14 | 2017-12-14 | |
US62/598,644 | 2017-12-14 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2019118927A2 true WO2019118927A2 (en) | 2019-06-20 |
WO2019118927A3 WO2019118927A3 (en) | 2019-08-08 |
Family
ID=66819030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/065846 WO2019118927A2 (en) | 2017-12-14 | 2018-12-14 | Conversational intelligence architecture system |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2019118927A2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230128497A1 (en) * | 2021-10-22 | 2023-04-27 | Tekion Corp | Machine learning-implemented chat bot database query system for multi-format database queries |
US12105704B2 (en) * | 2021-10-22 | 2024-10-01 | Tekion Corp | Machine learning-implemented chat bot database query system for multi-format database queries |
US20240411755A1 (en) * | 2021-10-22 | 2024-12-12 | Tekion Corp | Machine learning-implemented chat bot database query system for multi-format database queries |
Also Published As
Publication number | Publication date |
---|---|
WO2019118927A3 (en) | 2019-08-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18887665; Country of ref document: EP; Kind code of ref document: A2 |