WO2017223547A1 - Automated aggregated multivariate testing systems, methods, and processes - Google Patents

Automated aggregated multivariate testing systems, methods, and processes

Info

Publication number
WO2017223547A1
WO2017223547A1 (PCT/US2017/039161, US2017039161W)
Authority
WO
WIPO (PCT)
Prior art keywords
example embodiment
block
data
testing
variables
Prior art date
Application number
PCT/US2017/039161
Other languages
French (fr)
Inventor
Hung Dinh VU
Peter Q. Nguyen
Original Assignee
Ad Exchange Group
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to U.S. Provisional Application No. 62/354,415 (US201662354415P)
Application filed by Ad Exchange Group filed Critical Ad Exchange Group
Publication of WO2017223547A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce, e.g. shopping or e-commerce
    • G06Q30/02Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
    • G06Q30/0241Advertisement
    • G06Q30/0242Determination of advertisement effectiveness
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management, e.g. organising, planning, scheduling or allocating time, human or machine resources; Enterprise planning; Organisational models

Abstract

Systems, methods, and devices for automated split testing, including more effective and efficient methods and processes for the associated decision-making process and the resulting implementation of optimizations involved in all forms of split testing, including in digital advertising campaigns.

Description

AUTOMATED AGGREGATED MULTIVARIATE TESTING SYSTEMS, METHODS,
AND PROCESSES
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application No. 62/354,415, filed June 24, 2016, and titled "AUTOMATED SPLIT TESTING METHOD AND PROCESS," the entire contents and disclosures of which are hereby incorporated by reference.
FIELD OF THE INVENTION
[0002] This disclosure describes improvements in the field of automated split testing. In particular, the embodiments described herein can introduce more effective and efficient systems, methods, and processes for the decision making associated with split testing. Additionally, they can provide benefits to various implementations of optimizations resulting from and involved with many different types of split testing, including digital advertising campaigns and others.
BACKGROUND OF THE INVENTION
[0003] Split testing is generally known as one or more methods or processes for conducting controlled, randomized experiments with the goal of improving results associated with a particular metric. When more than one variable or metric is involved, split testing is often referred to as multivariate testing. Common applications of split testing techniques include online marketing and related e-commerce. Many webpage or website owners desire improved results based on user completion of registration forms and sign-up pages, responses to calls to action, ad click-through completion, payment page conversion, checkout flow, product bundling, active engagement with content, and others. As such, split testing can be an efficient and effective way for owners or analysts to improve various parts or features of a website by measuring activities according to various metrics, as well as measurable activities associated with other sales funnels.
[0004] Digital advertising campaigns are one example of an implementation where split testing can be employed in order to optimize one or more desired results. By modifying stated quantifiable goals, constraints in one or more models, data collection, and others, variant designs in a split test can produce winners and losers between controlled and variable designs. Examples of quantifiable goals can include: click-through rate, conversion rate, earnings per click, return on investment (ROI), corporate profitability, market share, and others. Model constraints can include variables related to cashflow or budget concerns, inventory availability, seasonality, credit line availability, sample size, system performance, and others. Data measurement and collection can include capturing or acquiring and then analyzing one or more of first-party, second-party, or third-party data. First-party data can be consumer data or system administrator related data. Second-party data can be data related to partners or clients. Third-party data can be data related to other outside parties. Thus, when properly designed and executed, split testing can identify improvements related to data-driven hypotheses that often provide improved performance, even when based on subjective model design choices.
[0005] From an online merchant's or advertiser's perspective, split testing may create or perpetuate many challenges when dealing with one or more online campaigns for a single proprietor. However, when these processes and methods are applied at the level and scope of a marketplace or third-party networks that manage campaigns, the multivariate testing and optimization challenge becomes a much bigger and more complex effort. This is true of many existing efforts to acquire customers or buy media, and also of efforts to match consumers to offers and offer designs. Therefore, it would be beneficial to the market and its operators to implement improved systems and methods, including efficient and effective automated experimentation and scoring for specific goals and constraints, by adopting combinations of associated success metrics and variables.
SUMMARY
[0006] In various embodiments described herein, implementation of goal-based, data-driven, or both goal-based and data-driven decision making can use one or more automated, on-going experimental frameworks. These frameworks may continue indefinitely until goals are met, desired data is derived, processes or iterations reach a particular threshold, or they are otherwise paused, suspended, or stopped by preset constraints or manual intervention. Examples of constraints can include one or more of: time constraints, iterative thresholds, resource conservation or management, and others. Additionally, the features and concepts herein can be applied to multi-track decision making that applies multiple success metrics, including combinations of consumer demographics and offer designs. Further, the features and concepts herein can be applied to multivariate testing. Also, split testing is sometimes referred to as A/B testing or multivariate testing, and each of these terms can be used interchangeably herein.
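The stopping behavior described above can be illustrated with a brief sketch. The `ExperimentState` fields, the `should_stop` helper, and all limit values below are illustrative assumptions for exposition, not part of the disclosure:

```python
# Hypothetical sketch of preset-constraint stopping: a framework runs until a
# goal is met, a threshold is crossed, or a manual intervention occurs.
from dataclasses import dataclass

@dataclass
class ExperimentState:
    goal_met: bool = False          # desired goal or data derived
    iterations: int = 0             # iterative threshold tracking
    elapsed_hours: float = 0.0      # time constraint tracking
    budget_spent: float = 0.0       # resource conservation tracking
    manually_paused: bool = False   # manual intervention flag

def should_stop(state: ExperimentState,
                max_iterations: int = 1000,
                max_hours: float = 72.0,
                max_budget: float = 5000.0) -> bool:
    """Return True when any goal, threshold, or preset constraint halts the run."""
    return (state.goal_met
            or state.manually_paused
            or state.iterations >= max_iterations
            or state.elapsed_hours >= max_hours
            or state.budget_spent >= max_budget)
```

In such a sketch, the framework would call `should_stop` between iterations and otherwise continue indefinitely, mirroring the on-going experimental loop described above.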
[0007] In general, the systems, methods, processes, devices, and features described herein can automate and improve any existing automated, iterative experimentation and split testing processes. Further, this can benefit many related or associated decision making processes. Additional benefits can be found in implementations that occur as a result of optimization systems and processes that are involved in many of the various types and forms of split testing.
[0008] As described with respect to the various disclosed embodiments, one or more new frameworks and associated taxonomies are introduced that can be utilized to perform improved win and loss analyses with more effective attribution. They can also be used to create and maintain one or more libraries or knowledge databases that store best-practices results. These can be associated with demographics and behaviors of various groups or individuals including: visitors, consumers, advertisement exchanges, and others. Once derived, these can then be used to drive decisions for associated patterns in design, marketing, re-marketing, product grouping, price points, or others. Additional decisions that can be driven include product or industry line types and market verticals. Further, in some embodiments, they can be used to drive specific parts, components, features, or other related portions of banner ads, sales pages, pre-sell pages, native content, and other pages.
[0009] Currently, there are two commonly perceived problems in the various markets and industries in which split testing is implemented. One example industry is that of digital advertising and its various sectors.
[0010] First, many markets and industries, including some sectors of digital advertising, have access to a variety of market tools that are able to quickly design a small number of campaign tests for a proprietor with one or a few concurrent campaigns. These existing market tools have many manually implemented processes that schedule, monitor, analyze, and determine when to perform additional actions, including one or more of: pausing, stopping, revising, adding or performing additional test rounds, and others. However, there are currently no tools available that automate decision-making protocols and effectively integrate with a Software as a Service ("SaaS") advertisement service platform. These service platforms often host a multitude of advertisers and an even larger number of campaigns.

[0011] As a result of these deficiencies, advertising campaign operators who wish to perform split testing on more than a limited number of campaigns may become frustrated with the often laborious, costly, and mistake-prone testing efforts that are currently available. As such, they may avoid otherwise lucrative campaigns in favor of those that are less lucrative as a result of choosing a limited number of campaigns to pursue. This can lead to failure in the form of monetary losses, opportunity losses, or combinations of both. These campaign operators might be fortunate to achieve a modest degree of success by performing testing methods that are primarily manual. However, they might be even more likely to receive negative results by improvising or otherwise operating blindly and hoping for the best. Limited testing efforts often result in conservative campaigns that miss the unknown and optimized opportunities that an automated, data-driven experimentation framework could discover.
[0012] Second, although no comprehensive split testing frameworks currently exist, certain sectors of digital advertising do have a variety of effective and efficient point-solution tools, but they are often so costly as to be prohibitive or to create a highly inefficient use of resources. Even when advertising campaign operators have invested in tools that attempt to address large quantities of split tests, these tools typically do not provide adequate robustness, well-designed taxonomies, or automated decision-making protocols. This can result in tests and experiments that are performed with limited or flawed scope, frequency, dependability, and other drawbacks leading to sub-optimal results. As such, these tools may yield low levels of confidence and subsequent low returns on investment.
[0013] The concepts herein address these two market deficiencies by providing improved automation in the decision-making process inherent to current split testing. Based on the implementation and application of defined goals, target success metrics, and known constraints, these automations can more effectively attribute causes to effects. They can also provide improved creation and maintenance of digital libraries of design assets, actionable best-practices methods, and decision history. These can be implemented efficiently with automated direct-response advertising servers that continuously drive additional design decisions during split testing procedures, including as applied to advertising pages and their sub-parts.

[0014] Those in the art will recognize various other limitations, issues, and problems that currently exist, in addition to opportunities that may be created based on implementation of the features and concepts described herein.
[0015] Other implementations of the systems, methods, and devices described herein are contemplated as well. As such, it should be understood that advertisers for many industries would benefit from the features disclosed herein. For example, these systems, methods, and processes can be applied to sales of shippable products, non-shippable products such as e-books, straight sell products, subscriptions, trial memberships, lead-generation products, and others.
[0016] Various alterations and modifications are contemplated without departing from the overall spirit and scope of this disclosure. The configuration of these systems, methods, and devices is described in detail by way of various embodiments which are only examples.
[0017] Other systems, devices, methods, features and advantages of the subject matter described herein will be or will become apparent to one with skill in the art upon examination of the following figures and detailed description. It is intended that all such additional systems, devices, methods, features and advantages be included within this description, be within the scope of the subject matter described herein, and be protected by the accompanying claims. In no way should the features of the example embodiments be construed as limiting the appended claims, absent express recitation of those features in the claims.
BRIEF DESCRIPTION OF THE DRAWING(S)
[0018] The details of the subject matter set forth herein, both as to its structure and operation, may be apparent by study of the accompanying figures, in which like reference numerals refer to like parts. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the subject matter. Moreover, all illustrations are intended to convey concepts, where relative sizes, shapes and other detailed attributes may be illustrated schematically rather than literally or precisely.
[0019] Illustrated in the accompanying drawing(s) is at least one of the best mode embodiments of the present invention. In such drawing(s):
[0020] FIG. 1A is an example embodiment of a basic network setup diagram.
[0021] FIG. 1B is an example embodiment of a network connected server system diagram.

[0022] FIG. 1C is an example embodiment of a user mobile device diagram.
[0023] FIG. 2 depicts an example embodiment of components of an experimental split testing platform.
[0024] FIG. 3A depicts an example embodiment overview diagram of a partial delineation of various parts of an experiment.
[0025] FIG. 3B depicts an example embodiment of a scheduling components block for use in experimental design.
[0026] FIG. 3C depicts an example embodiment of a web-page elements components block for use in experiment design.
[0027] FIG. 3D depicts an example embodiment of a landing page guideline elements block for use in experiment design.
[0028] FIG. 3E depicts an example embodiment of a landing page checklist items block for use in experiment design.
[0029] FIG. 3F depicts an example embodiment of a proven elements block for use in experiment design.
[0030] FIG. 3G depicts an example embodiment of an information block for use in experiment design.
[0031] FIG. 3H depicts an example embodiment of a specific elements block for use in experiment design.
[0032] FIG. 3I depicts an example embodiment of a types block for use in experiment design.
[0033] FIG. 3J depicts an example embodiment of a checklist block for use in experiment design.
[0034] FIG. 3K depicts an example embodiment of a sliders elements block for use in experiment design.
[0035] FIG. 3L depicts an example embodiment of a headline elements block for use in experiment design.
[0036] FIG. 3M depicts an example embodiment of a trust indicators block for use in experiment design.

[0037] FIG. 3N depicts an example embodiment of an images block for use in experiment design.
[0038] FIG. 3O depicts an example embodiment of a VSL basic elements block for use in experiment design.
[0039] FIG. 3P depicts an example embodiment of a sales pages elements block for use in experiment design.
[0040] FIG. 3Q depicts an example embodiment of a checkout pages elements block for use in experiment design.
[0041] FIG. 4A depicts an example embodiment taxonomy overview diagram of taxonomies of demographics, optimizations, and implemented templates that can be employed in automated split testing.
[0042] FIG. 4B depicts an example embodiment of a taxonomy template listing block for use in automated split testing.
[0043] FIG. 4C depicts an example embodiment of a taxonomy demographic elements block for use in automated split testing.
[0044] FIG. 4D depicts an example embodiment of a taxonomy methods of influence variables listing block for use in automated split testing.
[0045] FIG. 4E depicts an example embodiment of a taxonomy user experience listing block for use in automated split testing.
[0046] FIG. 4F depicts an example embodiment of a taxonomy branding equals depositing listing block for use in automated split testing.
[0047] FIG. 4G depicts an example embodiment of a taxonomy value proposition listing block for use in automated split testing.
[0048] FIG. 4H depicts an example embodiment of a taxonomy closer listing block for use in automated split testing.
[0049] FIG. 4I depicts an example embodiment of a retargeting information block for use in automated split testing.

[0050] FIG. 5A depicts an example embodiment diagram of applications and interactions between components of an experimental split testing platform and taxonomies as applied to or by a best practices library.
[0051] FIG. 5B depicts an example embodiment taxonomy overview diagram of taxonomies of media sources that can be employed in automated split testing.
[0052] FIG. 5C depicts an example embodiment taxonomy overview diagram of taxonomies of consumer devices that can be employed in automated split testing.
[0053] FIG. 5D depicts an example embodiment taxonomy overview diagram of taxonomies of connection types that can be employed in automated split testing.
[0054] FIG. 5E depicts an example embodiment taxonomy overview diagram 550 of taxonomies of new account types that can be employed in automated split testing.
[0055] FIG. 6A shows an example embodiment of a system process flow diagram.
[0056] FIG. 6B shows an example embodiment of a business to customer sales flow diagram.
[0057] FIG. 6C shows an example embodiment of an automated feedback process flow diagram.
[0058] FIG. 6D shows an example embodiment of an internal business decision protocol flow diagram.
[0059] FIG. 6E shows an example embodiment of a customer charge adjustment diagram.
[0060] FIG. 7 shows an example embodiment of a system architecture and process flow diagram.
[0061] FIG. 8 depicts an example embodiment diagram of a real-time complex event and data processing architecture.
[0062] FIG. 9 depicts an example embodiment diagram of an open API and micro services architecture.
[0063] FIG. 10 depicts an example embodiment diagram of an elastic container as a service architecture.
[0064] FIG. 11 depicts an example embodiment diagram of an entity relationship diagram architecture or data model.
[0065] FIG. 12 depicts an example embodiment diagram of a system architecture.

[0066] FIG. 13 depicts an example embodiment of an overall CAMP platform diagram.
[0067] FIG. 14A shows example embodiment diagrams of how advertising media can be displayed on user interfaces via email, native displays, social media, and email, respectively, on various user devices.
[0068] FIG. 14B shows an example embodiment user interface diagram of an initial advertisement offer page.
[0069] FIG. 14C shows an example embodiment user interface diagram of a secondary page.
[0070] FIG. 14D shows an example embodiment user interface diagram of a shopping cart review page.
[0071] FIG. 14E shows an example embodiment user interface diagram of a shipping information page.
[0072] FIG. 15 shows an example embodiment user interface diagram of an Operational Dashboard.
[0073] FIG. 16A shows an example embodiment user interface diagram of a visual campaign performance at a glance heat-zone report.
[0074] FIG. 16B shows an example embodiment user interface diagram of a traffic dashboard.
[0075] FIG. 16C shows an example embodiment user interface diagram of an affiliate manager dashboard.
[0076] FIG. 16D shows an example embodiment user interface diagram of a retention report.
[0077] FIG. 17 shows an example embodiment user interface diagram of an affiliate dashboard on a mobile user device platform.
[0078] FIG. 18 shows an example embodiment user interface diagram of an affiliate dashboard on a user device, such as a desktop computer.
[0079] FIG. 19 shows an example embodiment user interface diagram of an affiliate Sub ID report.
DETAILED DESCRIPTION
[0080] Before the present subject matter is described in detail, it is to be understood that this disclosure is not limited to the particular embodiments described, as such may vary. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting, since the scope of the present disclosure will be limited only by the appended claims.
[0081] Mobile applications; mobile devices such as smart phones and tablets; application programming interfaces (APIs); databases; social media platforms, including social media profiles or other sharing capabilities; load balancers; web applications; page views; networking devices such as routers, terminals, gateways, network bridges, switches, hubs, repeaters, protocol converters, bridge routers, proxy servers, firewalls, network address translators, multiplexers, network interface controllers, wireless interface controllers, modems, ISDN terminal adapters, line drivers, wireless access points, and cables; servers; and other equipment and devices as appropriate to implement the methods and systems described herein are contemplated.
[0082] As described herein, multivariate testing and split testing can be related in various embodiments. Running testing scenarios against multiple user interface advertising elements, metrics, and decision-making protocols is highly beneficial for the reasons described previously, and because the embodiments described herein do not follow prior known approaches to split testing, which are generally used to test one variable against a small number of advertisement designs for a single offer. Multivariate testing can be scoped at a test page level and can involve testing several page components while also running multi-track testing for different products, offers, deals, and other data.
[0083] FIG. 1A is an example embodiment of a basic network setup diagram 100. As shown in the example embodiment, network setup diagram 100 can include multiple servers 140, 150, which can include applications distributed on one or more physical servers, each having one or more processors, memory banks, operating systems, input/output interfaces, power supplies, network interfaces, and other components and modules implemented in hardware, software, or combinations thereof as are known in the art. These servers can be communicatively coupled with a wired, wireless, or combination network 110, such as a public network (e.g., the Internet, a cellular-based wireless network, a cloud-based network, or other public network), a private network, or combinations thereof as are understood in the art. Servers 140, 150 can be operable to interface with websites, webpages, web applications, social media platforms, advertising platforms, and others. As shown, a plurality of end user devices 120, 130 can also be coupled to the network and can include, for example: user mobile devices such as smart phones, tablets, phablets, handheld video game consoles, media players, and laptops; wearable devices such as smartwatches, smart bracelets, smart glasses, or others; and other user devices such as desktop devices, fixed-location computing devices, video game consoles, or other devices with computing capability and network interfaces and operable to communicatively couple with network 110.
[0084] FIG. 1B is an example embodiment of a network connected split testing server system diagram 140. As shown in the example embodiment, a split testing server system can include at least one user device interface 147 implemented with technology known in the art for facilitating communication between system user devices and the server and communicatively coupled with a server-based application program interface (API) 150. API 150 of the server system can also be communicatively coupled to at least one tracking and routing engine 148 for communication with web applications, websites, webpages, social media platforms, and others. As such, it can access information via a network when needed. API 150 can also be communicatively coupled with a best practices and decision history database 141, a design content and asset database 142, a taxonomy database 143, an event store and data warehouse database 144, a testing and matching database 145, and a scoring and rating database 146, combinations thereof, or other databases and other interfaces. API 150 can instruct databases 141, 142, 143, 144, 145, 146 to store (and retrieve from the databases) information such as variables, elements, best practices, account information, or others as appropriate. Databases 141, 142, 143, 144, 145, 146 can be implemented with technology known in the art, such as relational databases, object-oriented databases, combinations thereof, or others. Databases 141, 142, 143, 144, 145, 146 can be a distributed database, and individual modules or types of data in the database can be separated virtually or physically in various embodiments.
Further, best practices database 141 can store data related to best practices, experimental design database 142 can store data related to designing experiments, taxonomy database 143 can store data related to taxonomies, event store and data warehouse database 144 can store data related to events and ratings, testing and matching database 145 can store information related to testing and matching, and scoring and rating database 146 can store information related to scoring and rating, as elaborated on elsewhere herein.
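As a brief, hypothetical illustration of the storage routing just described, a record type could be mapped to one of the six specialized stores. The mapping keys, store names, and `store_record` helper below are assumptions for exposition only, not the disclosed implementation:

```python
# Illustrative routing of record types to the specialized stores 141-146.
# Real implementations could use relational or object-oriented databases;
# plain dictionaries stand in for them here.
DATASTORE_ROUTES = {
    "best_practice": "best_practices_db_141",
    "design_asset": "design_content_db_142",
    "taxonomy": "taxonomy_db_143",
    "event": "event_warehouse_db_144",
    "test_match": "testing_matching_db_145",
    "score": "scoring_rating_db_146",
}

def store_record(record_type: str, payload: dict, stores: dict) -> str:
    """Append a payload to the store mapped to its record type; return the store name."""
    target = DATASTORE_ROUTES[record_type]
    stores.setdefault(target, []).append(payload)
    return target
```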
[0085] FIG. 1C is an example embodiment of a user mobile device diagram 121. As shown in the example embodiment, a user mobile device 121 can include a network connected split testing application or component 122 that is installed in, pushed to, or downloaded to the user mobile device or its internet browser application. In many embodiments, user devices are touch screen devices such as smart phones, phablets, or tablets, which have at least one processor, network interface, camera, power source, memory, speaker, microphone, input/output interfaces, operating system, and other typical components and functionality.
[0086] In some embodiments, split testing application 122 may not be installed on user device 121. Instead, it may be replaced by one or more of a system administrator application, an advertiser application, an affiliate application, a consumer application, or others. In some embodiments, a dedicated application for any of these may not be installed on user device 121. Instead, users may access a portal via a web browser installed on device 121, which may be dedicated or hybrids of system management portals, advertiser portals, affiliate portals, consumer portals, or others.
[0087] FIG. 2 depicts an example embodiment diagram 200 of elements of an experimental split testing and decision making platform. As shown in the example embodiment, this platform can be understood as an Automated Aggregated Multivariate Testing process 202, which can include several different steps. In the example embodiment, the full-featured testing platform includes nine steps. These can be grouped differently in some embodiments and in some embodiments various additional steps or sub-steps can also be employed.
[0088] As shown in diagram 200, an initial step 204 can include defining goals. Once goals are defined, a second step 206 can be to identify measurements and barriers. A third step 208 can include constructing one or more hypotheses. Next, a fourth step 210 can be a prioritization step in which the goals and hypotheses are prioritized. Examples can include order of importance, expected likelihood of success, expected ease of testing, and others. A fifth step 212 can be to design one or more experiments for the highest priority item identified in step 210. Once designed, the experiment can then be executed in a sixth step 214. Results of the experiment can be measured in a seventh step 216. Based on the results of the experiment, winners can be determined based on one or more criteria in an eighth step 218. These winners can be winning designs that are automatically promoted, deployed, and adopted as new performance benchmarks. Additional or new experimental and variant designs of the continuous multivariate testing process can then be measured against these new performance benchmarks. In a ninth step 220, best practices can be created, revised or modified, or eliminated.

[0089] Currently, no available integrated automation platforms address or successfully employ even half of the nine steps shown in diagram 200. For example, most available platforms do not engage in experimental design step 212, experiment execution step 214, result capturing step 216, or winner identification step 218, and do not employ the associated taxonomy, metadata, and scoring engines. Even those platforms that do employ or support part of this process are known to have limited scope, efficiency, and accuracy.
Further, no analysis platforms include tools that are directed to addressing step 220, in which best practices are created, revised, modified, or eliminated based on the decision-making protocols and processes that are subject to goal definition step 204, barrier and measurement identification step 206, hypothesis construction step 208, and priority setting step 210.
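One pass through the steps of diagram 200 can be sketched as a simple loop body. Everything below (the function names, the expected-lift prioritization rule, and the input shapes) is a placeholder assumption for exposition; only the step ordering comes from the description of FIG. 2:

```python
# Minimal sketch of steps 210-220 of diagram 200: prioritize hypotheses,
# "run" the top-priority variant, and promote a winner as the new benchmark.
# Goals (204), measurements/barriers (206), and hypotheses (208) are assumed
# to arrive pre-built as the `candidates` list.
def run_cycle(candidates: list, benchmark: float) -> tuple:
    # Step 210: prioritize by expected lift (a stand-in scoring rule).
    ranked = sorted(candidates, key=lambda c: c["expected_lift"], reverse=True)
    # Steps 212-216: design and execute the top-priority experiment;
    # `observed_rate` stands in for real measurement capture.
    top = ranked[0]
    observed = top["observed_rate"]
    # Step 218: the variant wins only if it beats the current benchmark.
    if observed > benchmark:
        # Step 220: the winner is promoted as the new performance benchmark
        # and would feed back into the best practices library.
        return top["name"], observed
    return "control", benchmark
```

In a continuous process, the returned benchmark would seed the next cycle, so each new variant is measured against the most recent winning design.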
[0090] Through the generation, use, updating, and maintenance of various taxonomies, the platform shown in the example embodiment diagram 200 can perform all of the steps described, which then provides detailed, accurate, and effective results. In some embodiments, proprietary taxonomies are applied to these steps; these can be generated using publicly known taxonomies in addition to privately generated taxonomies. Application of these taxonomies is used to generate and maintain a best practices library, as well as to implement these best practices in automated direct response advertising server platforms.
[0091] The best practices library can incorporate and reflect direct and indirect results from the constantly evolving correlation of multiple variables from one or more processes and artifacts that form the various portions or the whole of the automated aggregated multivariate testing process. For example, a first type of best practice process stored in the library may affect how hypotheses are constructed based on confidence levels of lessons learned and extrapolated projections. A second may affect how prioritization is operable to perform automatic and dynamic adjustments. A third may affect how the parameters of the experiment will be executed. A fourth may affect the weighting of various metrics and measurements, and the logical combinations of measurements. A fifth may affect one or more combinations of hypotheses, measurements, and designs.
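As a small sketch of the second best-practice type above (automatic, dynamic prioritization adjustment), a confidence weight drawn from the library could scale a hypothesis's base priority. The field names and the multiplicative rule are illustrative assumptions, not the disclosed mechanism:

```python
# Hypothetical prioritization adjustment: the best practices library stores a
# confidence weight per learned pattern, and that weight scales the base
# priority of any new hypothesis matching the pattern.
def adjusted_priority(hypothesis: dict, library: dict) -> float:
    """Scale base priority by the library's confidence in similar past lessons."""
    confidence = library.get(hypothesis["pattern"], 1.0)  # unknown patterns unchanged
    return hypothesis["base_priority"] * confidence
```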
[0092] It should be understood that one or more processors of split testing servers are operable to perform these steps, as stored in non-transitory memory on the server or accessible via networking interfaces of the server. These processors can operate in parallel and handle large quantities of data, as required by various experiments and other operations. Human interaction and interruption of the various processes described herein can be implemented in various embodiments and can include prompts, data entry, selection and manipulation of various elements and variables, all via a user interface, including audio, visual, manual, or other interactive methods.
[0093] FIG. 3A depicts an example embodiment overview diagram 300 of a partial delineation of various parts of an experiment. These include variables that can be manipulated and tracked during the automated split testing process against various applicable hypotheses. These variables can also be manipulated and tracked during the automated process in order to track and modify best practices. Test iterations are one example of a variable that can be manipulated, where iteration numbers can be increased or decreased for speed or efficiency, or to affect other outcomes including resource use. Overview diagram 300 can be understood as an extension of experiment design step 212 of FIG. 2.
[0094] In particular, overview diagram 300 depicts hierarchically how elements and scheduling of a design can be manipulated. As shown in the example embodiment, overall experimental design block can be broken down at a high level into scheduling block 304 and elements block 308.
[0095] Hierarchically, scheduling block 304 and its subordinate blocks depict how one or more specific combinations of scheduling variables can be manipulated so that an experiment runs for a specified duration or until a targeted level of statistical significance or confidence threshold is reached. Also, these can be tested against variables under elements block 308 and its hierarchy of subordinate element blocks. These depict how variable design elements of an advertisement can be tested against other variable design elements. These can also be tested against controlled design elements, and held as controlled themselves, as can scheduling variables.
[0096] In the example embodiment, scheduling block 304 can further be broken down into a scheduling variables block 306, as shown in FIG. 3B. These scheduling variables or targets can include: number of conversions, number of variations, confidence level, amount of traffic, lift percentage, and others.
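As one hypothetical illustration of how these scheduling targets can interact, the following JavaScript sketch stops a variation test once a confidence threshold (expressed here as a z-score) is reached, using a standard two-proportion z-test; the function name and input shapes are assumptions, not part of the specification:

```javascript
// Hypothetical scheduling check: compare control and variant conversion
// rates and report whether the targeted confidence level has been reached.
function reachedConfidence(control, variant, targetZ) {
  const p1 = control.conversions / control.traffic;
  const p2 = variant.conversions / variant.traffic;
  // Pooled conversion rate across both arms of the test.
  const pooled = (control.conversions + variant.conversions) /
                 (control.traffic + variant.traffic);
  const se = Math.sqrt(pooled * (1 - pooled) *
                       (1 / control.traffic + 1 / variant.traffic));
  const z = Math.abs(p2 - p1) / se;
  const liftPercent = 100 * (p2 - p1) / p1;
  return { z, liftPercent, significant: z >= targetZ };
}

// Example: 95% confidence roughly corresponds to z >= 1.96.
const check = reachedConfidence(
  { conversions: 200, traffic: 10000 },  // control: 2.0% conversion
  { conversions: 260, traffic: 10000 },  // variant: 2.6% conversion
  1.96);
```

In this sketch the number of conversions, amount of traffic, lift percentage, and confidence level each map directly to the scheduling variables or targets listed above.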
[0097] Elements block 308 can be broken down into a number of subordinate component blocks, such as email block 310, analysis components block 312, and landing pages block 314. Analysis components block 312 can include a number of subordinate component blocks, such as: lead magnets block 334, page layouts block 336, headlines block 338, product descriptions, copies, Calls to Action (CTA's), trust indicators block 340, point of action assurances, forms, images block 342, badges, videos block 344, sales pages block 346, shopping cart and checkout flow, checkout pages block 348, and others as shown in FIG. 3C. Examples of point of action assurances can include free delivery offers, security offers, and others. Examples of badges or tags can include new, special, limited, sale, reduced price, clearance, and others.
[0098] Email block 310 can further be broken down into a number of subordinate component blocks, such as proven block 324, information block 328, and non-impacting elements block 330. Proven block 324 can be broken down into proven elements block 326 including eight or more angles. These angles can include self-interest, curiosity, offer, urgency or scarcity, humanity, news, social proof, story, and others, as shown in FIG. 3F.
[0099] Information block 328 can include components such as shorter, second subline, image included in CTR, pain equals gain, CSS buttons, single step registration, Unicode symbols to highlight, and others, as shown in FIG. 3G.
[00100] Non-impacting elements block 330 can further be broken down into specific elements block 332. These elements can include long versus short emails, banner color changes, picture specifications such as face of a comp versus a product in the banner, more versus less CTAs, and others, as shown in FIG. 3H.
[00101] Landing pages block 314 can further be broken down into landing pages guidelines block 316 and landing pages checklist block 320. Landing pages guidelines block 316 can include one or more guideline elements in guideline elements block 318. Examples of these elements can include basics, product images, reiterate CTA below the fold, ditch y-axis jump, reports versus videos, form length coincides with offer, and others, as shown in FIG. 3D. Landing pages checklist block 320 can include one or more checklist items in a checklist items block 322. Checklist item elements can include items relating to the effectiveness and visual presentation of an advertisement. Examples can include market callout, clear and concise, easily understood, compelling headline, CTA above the fold, contrasting button color, custom button text, social proof, limited navigation, use visual cues, hero shot, limited form fields, source congruency, brand consistency, enable sharing, visible privacy policy and terms of service (TOS), and others, as shown in FIG. 3E.
[00102] Lead magnets block 334 can further be broken down into lead magnet types block 350 and lead magnets checklist block 354. Types block 350 can further be broken down into element types block 352, including: report or guide, toolkit or resource list, SW download or trial, quiz or survey, blind or sales material, cheat sheet or handout, video training, discount or free shipping, assessment or test, step 1 of order form, and others, as shown in FIG. 3I. Checklist block 354 can further be broken down into checklist elements block 356, including: ultra-specific, one big thing, speaks to a known or desired result, immediate gratification, shifts the relationship, high perceived value, high actual value, rapid consumption, and others, as shown in FIG. 3J.
[00103] Page layouts block 336 can further be broken down into "Don't distract" sliders block 358 and general issues block 360, which can include visual hierarchy, the fold, visual cues, involvement devices such as polls, triggered pops, and others. Sliders block 358 can further be broken down into sub-sliders block 362 and animations block 366. Sliders block 358 can further be broken down into sliders elements block 364, including: compress images to reduce load time, turn off auto-slide, if you cannot turn off auto-slide then increase the time between slide panes, and others, as shown in FIG. 3K.
[00104] Headlines block 338 can further be broken down into headline elements block 368, which can include "How to ...", "Who else wants ...?", "... in 3 simple steps!", and others, as shown in FIG. 3L. Trust indicators block 340 can further be broken down into trust indicators elements block 370, which can include privacy policies, security seals, guarantees, testimonials, and others, as shown in FIG. 3M. Images block 342 can further be broken down into images elements block 372, which can include real people, avoid un-edited stock photographs, use image captions, images point to CTAs, images with minimal copy, and others, as shown in FIG. 3N.
[00105] Videos block 344 can further be broken down into no video fakeouts block 374 and VSL basics block 376. VSL basics block 376 can further be broken down into basics elements block 378, which can include length, remove video controls, inactive page pause, time triggered CTA, autoplay, trigger long form sales letter with CTA button, and others, as shown in FIG. 3O.
[00106] Sales pages block 346 can further be broken down into sales pages elements block 380, which can include fonts matter, reiterate CTAs, testimonials and social proof, visual deliverables consistent with offer, strong advertisement scent, and others, as shown in FIG. 3P. Checkout pages block 348 can further be broken down into checkout pages elements block 380, which can include trust seals, price front and center, visual deliverables, linked policies, no navigation, and others, as shown in FIG. 3Q.
[00107] FIG. 4A depicts an example embodiment taxonomy overview diagram 400 of taxonomies of demographics, optimizations, and implemented templates that can be employed in automated split testing. Thus, taxonomy overview diagram 400 depicts a hierarchy of qualitative measurements for various testing or production optimization variables and associated demographics data. These variables and data can be measured, scored, and assessed or analyzed when applied by decision making protocols. As such, they can produce one or more desired or required goal oriented actions. Further, they can be manipulated manually, automatically, or through combinations thereof. This can allow the system to make decisions based on simulations and actual data, such as what offer variant, version, and device template are the most suitable to specific online traffic sources and consumer demographics. For example, given a particular set of goals for an optimization analyst, client, or partner, the system can present the best possible outcome.
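A minimal JavaScript sketch of such taxonomy-driven matching follows; the template names, taxonomy facets, and weights are purely illustrative assumptions, not part of the specification:

```javascript
// Hypothetical sketch: score each template against a visitor's demographic
// and device facets, then pick the best-scoring template for that traffic.
const templates = [
  { name: "mobile-v2", tags: { device: "iPhone", ageBand: "18-34" }, baseScore: 0.6 },
  { name: "desktop-v1", tags: { device: "Desktop", ageBand: "35-54" }, baseScore: 0.5 },
];

function matchTemplate(visitor) {
  let best = null;
  for (const t of templates) {
    let score = t.baseScore;
    // Each matching taxonomy facet adds weight to the template's score.
    if (t.tags.device === visitor.device) score += 0.3;
    if (t.tags.ageBand === visitor.ageBand) score += 0.2;
    if (!best || score > best.score) best = { name: t.name, score };
  }
  return best;
}
```

In practice the facet weights themselves would be outputs of the testing process, so the matching improves as experiments accumulate.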
[00108] As shown in the example embodiment, overall taxonomy block 402 can be broken down at a high level into template name block 404, demographics block 408, and optimization variables block 412.
[00109] Template name block 404 can be further broken down into a listing of templates in template listing block 406. This listing can include names or indicators of devices, product types, template types, versions, and others, as shown in FIG. 4B. Similarly, demographics block 408 can be further broken down into demographic elements block 410. These elements can include sex, gender, age, educational level, income level, credit card type, credit rating or credit worthiness, social persona, religion, family size, political affiliation, ethnicity, language, country, state or province, county, city, zip code, and many others, as shown in FIG. 4C.
[00110] Optimization variables block 412 can include a number of subordinate blocks. As shown in the example embodiment, these can include lead and presale block 414, radical redesign block 416, methods of influence block 418, user experience (UX) block 422, relational equity block 426, sub-sections block 434, and others. Methods of influence block 418 can be further broken down into a methods of influence variables listing 420. Examples in this list can include reciprocity (R), commitment and consistency (CC), social proof (SP), likeability (L), authority, scarcity or urgency, or others, as shown in FIG. 4D. UX block 422 can be further broken down into UX listing block 424. Examples in this list can include VSL, autoplay versus click to play, headline above video, product deliverables, graphics, background color, time on page, and others, as shown in FIG. 4E.
[00111] Relational equity block 426 can be broken down into selling equals withdrawing block 428 and branding equals depositing block 430. Branding equals depositing block 430 can be further broken down into branding equals depositing elements block 432. This block can further be broken down into brand types block 454 such as "Make 'em laugh," "Make 'em cry," "Make 'em feel a part of something," as shown in FIG. 4F, and deliver value in advance block 456.
[00112] Deliver value in advance block 456 can further be broken down into other retargeting information block 460 and platforms block 462. Other retargeting information block 460 can include introduction, social media traffic, video retargeting, plug leaky sales funnel with social media, and others, as shown in FIG. 4I.
[00113] Sub-sections block 434 can include a variety of optimization variables sub-sections. As shown in the example embodiment, these can be broken down into a how it works block 438, testimonials block 440, what is it block 442, scientific proof block 444, value proposition above the fold block 446, closer below the fold block 450, and others. Value proposition above the fold block 446 can be further broken down into an above the fold listing block 448. This listing can include headlines, sub-headlines, guarantees, and others, as shown in FIG. 4G. Below the fold block 450 can be further broken down into below the fold listing block 452. This listing can include options, price, buy button, and others, as shown in FIG. 4H.
[00114] FIG. 5A depicts an example embodiment diagram 500 of applications and interactions between components of an experimental split testing platform and taxonomies as applied to or by a best practices library. Steps in an automated split test are shown in the lower half of FIG. 5A and are described in further detail with respect to FIG. 2. As shown in the example embodiment, these can be applied to, change, and be stored in a best practices library 502. This can also be affected by, access, and take into account information from taxonomies 504. As described previously, taxonomies 504 can include demographics 506, template names 508, optimization variables 510, traffic 514, and others. Also, as described previously, optimization variables can be further broken down into listing 512, including lead and pre-sale, relational equity, radical redesign, sub-sections, user experience (UX), methods of influence, and others. Traffic block 514 can further be broken down into media source, consumer device, connection type, new account, and others, each further broken down with respect to FIGs. 5B-5E.
[00115] To elaborate, possible conclusions from split testing experiments can affect best practices library 502 in step 220. This provides the automated system the ability to add, revise, or eliminate items in the best practices library as they relate to data, information, and other content. As such, observed and measured patterns, results, economics, and other information can be continuously collected, analyzed, and refined into theories that can be reused in the form of best practices. This information can be stored in non-transitory memory, such as one or more databases, in an indexed and publicly or privately accessible library. In some embodiments, this can lead to a machine or computer learning system that functions as a corporate, industry, scientific, or one of many other knowledge bases.
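One hypothetical way to sketch the add/revise/eliminate behavior of step 220 in JavaScript follows; the entry shapes and thresholds are illustrative assumptions, not part of the specification:

```javascript
// Hypothetical sketch of step 220: maintain a best-practices library as a
// running win/trial record per practice, adding, revising, or eliminating
// entries as experiment outcomes arrive.
function updateBestPractices(library, outcome) {
  const entry = library.find((e) => e.practice === outcome.practice);
  if (!entry) {
    // Add: first evidence for an unseen practice.
    library.push({ practice: outcome.practice, wins: outcome.won ? 1 : 0, trials: 1 });
  } else {
    // Revise: fold the new result into the running record.
    entry.trials += 1;
    if (outcome.won) entry.wins += 1;
    // Eliminate: drop practices that keep losing after enough trials
    // (the 5-trial / 20%-win thresholds here are arbitrary examples).
    if (entry.trials >= 5 && entry.wins / entry.trials < 0.2) {
      library.splice(library.indexOf(entry), 1);
    }
  }
  return library;
}
```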
[00116] FIG. 5B depicts an example embodiment taxonomy overview diagram 520 of taxonomies of media sources that can be employed in automated split testing. As shown in the example embodiment, sources can include social media sources, email sources, search engine optimization (SEO) sources, and other sources. Examples of social media sources include Facebook, Instagram, Twitter, and others. Examples of SEO sources include Google and others. Examples of other sources include Native sources, Display sources, Telephonic sources, and others. Any of the media sources (Social, Email, SEO, Others) can be obtained by media buyers/affiliates, as opposed to merchants themselves, on a performance-based or cost-per-acquisition (CPA) basis. In this case, the traffic is called paid media.
[00117] FIG. 5C depicts an example embodiment taxonomy overview diagram 530 of taxonomies of consumer devices that can be employed in automated split testing. As shown in the example embodiment, consumer devices can include iPhone, Android, Desktop, iPad, Android Tablet, and others.
[00118] FIG. 5D depicts an example embodiment taxonomy overview diagram 540 of taxonomies of connection types that can be employed in automated split testing. As shown in the example embodiment, connection types can include 3G, 4G, Broadband Residential, Broadband Commercial, and others.

[00119] FIG. 5E depicts an example embodiment taxonomy overview diagram 550 of taxonomies of new account types that can be employed in automated split testing. As shown in the example embodiment, new account types can include yes, no, re-targeted, and others.
[00120] FIG. 6A shows an example embodiment of a system process flow diagram 600. As shown in the example embodiment, various internal flows are shown by different arrow types, with differing dashes for each type of flow that match with the arrow types and dashes in FIGs. 6B-6E. As such, each process shown will be described in further detail with respect to FIGs. 6B-6E.
[00121] In example embodiment diagram 600, advertising impressions 602 viewed by customers 601 can be transmitted to a camp platform or marketplace 650. Camp platform 650 can include at least one router 604, an anti-fraud and security module (called SITEprotect™ for example), an offer server environment (called CAMPsites™ for example), one or more groups of associated merchant offers 606 (called SMARTtunnel™ for example), customer relationship management (CRM) system adapter 608, sale units management 620 (called CAP management for example), advanced analytics engine 622, advertiser portal 624, affiliate portal 628, marketplace optimization and auto-matching engine 630, automated aggregated multivariate testing engine 632, adaptive offer page or offer funnel 634, and others. As shown in the example embodiment, affiliate portal 628 allows one or more affiliates 605 to access Camp platform 650. Similarly, advertiser portal 624 allows one or more advertisers 603 to access Camp platform 650.
[00122] Third-party data stored in non-transitory memory on a third-party database 636 can be accessed via a network connection by advanced analytics engine 622. Likewise, Merchant CRMs 610, fulfillment centers 618, call centers 624, and chargeback management 626 can each send their own respective data to Camp platform 650 via a network connection. Processes shown are described in further detail with respect to FIGs. 6B-6E.
[00123] FIG. 6B shows an example embodiment of a business to customer sales flow diagram 6000. As shown in the example embodiment, initially, a customer or other advertisement viewing user 601 can view an advertisement on a user interface of a network connected user device via a network in step 6002. This advertisement can be preset with an ad impression tracker 602 that logs the impression and transmits data regarding the advertisement impression to CAMP platform or marketplace 650 for processing in step 6004 via the network, where it is received by router 604 of Camp platform 650. Router 604 processes the data in step 6006 and can then route this visitor or traffic through the combination of functional modules that can be called SITEprotect, CAMPsites, and SMARTtunnel 606 in step 6008. Customer orders identified by 606 can then be sent to the CRM-agnostic adapter 608 for further processing in step 6010. CRM adapter 608 processing can include identifying prospects, customers, orders, and other desirable information according to rules programmed and stored in non-transitory memory. Once this information has been identified it can be sent via a network connection to one or more merchant CRMs 610 for processing in step 6012.
[00124] In the example embodiment, the process then continues outside of Camp platform 650, where merchant CRM 610 receives the information, processes it, and sends payment information to a merchant bank payment processor 612 for processing in step 6014. Payment processor 612 can then move forward with processing identified payments by transmitting them to an external credit card network 614 for processing in step 6016. When processed, credit card network 614 can then inform a customer credit card issuing bank 616 to charge a customer's credit card who has purchased an item offered in the original advertisement in step 6018. Additionally, merchant CRMs 610 can transmit order fulfillment information to fulfillment center 618 for processing for identified orders in step 6020.
[00125] FIG. 6C shows an example embodiment of an automated feedback process flow diagram 6100. As shown in the example embodiment, advanced analytics engine 622 can receive information from various sources and process it in step 6102 in order to determine the accuracy, efficiency, and other value metrics for advertising impressions. Next, Merchant CRM 610 can transmit any identified offer information to CAP management 620 of Camp platform 650 for processing in step 6104 via a network connection. CAP management 620 can then determine any offer KPIs and send this information along to advanced analytics engine 622 for processing in step 6108. Likewise, data identified in processing step 6102 by merchant CRM 610, step 6110 by SITEprotect/CAMPsites/SMARTtunnel 606, step 6118 by fulfillment center 618, step 6122 by call center 624, and step 6126 by chargeback management 626 can be sent to Camp platform 650 via the network connection for processing by advanced analytics engine 622 in steps 6106, 6112, 6116, 6120, 6124, and 6128 respectively. In various embodiments, this data can generally be understood to include post sale events and KPIs, although other relevant and valuable information can be sent as well. As shown, additional KPIs, ratings, and pertinent data gleaned from processing by SITEprotect/CAMPsites/SMARTtunnel 606 and marketplace optimization and auto-matching engine 630 can also be sent to advanced analytics engine 622 for processing. Additionally, one or more processing steps can be run concurrently or simultaneously in parallel or can be subject to priority processing based on value, order of operations, first in first out, or other metrics, as appropriate.
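As a hypothetical illustration of the value-metric determination in step 6102, the sketch below combines KPIs from several feedback sources into a single weighted score; the KPI names and weights are assumptions, not from the specification:

```javascript
// Hypothetical sketch: fold KPIs gathered from CRM, fulfillment, call
// center, and chargeback feedback into one weighted campaign score.
// Positive weights reward conversions; negative weights penalize refunds
// and chargebacks. Missing KPIs default to zero.
function scoreCampaign(kpis) {
  const weights = { conversionRate: 0.5, refundRate: -0.3, chargebackRate: -0.2 };
  let score = 0;
  for (const [name, weight] of Object.entries(weights)) {
    score += weight * (kpis[name] || 0);
  }
  return score;
}
```

A real analytics engine would learn or tune these weights rather than hard-code them; the sketch only shows how heterogeneous feedback can be reduced to one comparable metric.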
[00126] FIG. 6D shows an example embodiment of an internal business decision protocol flow diagram 6200. As shown in the example embodiment, advertisers 603 with data in step 6202 can access, upload data to, and interact with Camp Platform 650 via advertiser portal 624. This data can be processed by advertiser portal 624 in step 6204. Likewise, advanced analytics engine 622 can send data identified in step 6210 to advertiser portal 624 for processing in step 6212, which can then be accessed by advertisers 603. This data can include campaign KPIs, CLV, and others, as appropriate.
[00127] Advanced analytics engine 622 can transmit processed data identified in step 6214 to affiliate portal for processing in step 6216, which can then be accessed and interacted with by affiliates 605. This data can include information regarding traffic quality, EPC, KPIs, and others, as appropriate.
[00128] Advanced analytics engine 622 can also transmit processed data identified in step 6218 for processing and storage by marketplace optimization and auto matching engine 630 in step 6220. Likewise, information can be sent to or accessed by advanced analytics engine 622 from engine 630 for processing in step 6224. Engine 630 can identify processed data in step 6226 for processing and use in step 6228 by aggregated split testing engine 632. After processing, data identified in step 6232 can be sent to router 604 for processing in step 6234. Likewise, processed data can be sent back to engine 630 for further processing in step 6230. Engine 630 can send data processed in step 6236 to adaptive sales funnel 634 for processing in step 6238 and transmission to SITEprotect/CAMPsites/SMARTtunnel 606 for processing in step 6240.
[00129] At various times during business decision protocol flows, advanced analytics engine 622 can also identify data in step 6242 for transmission to third party data repositories 636 for processing and storage in step 6244. Likewise, third party repositories 636 can transmit data or advanced analytics engine 622 can access data for processing in step 6246.
[00130] FIG. 6E shows an example embodiment of a customer charge adjustment diagram 6300. As shown in the example embodiment, customers 601 occasionally have issues with orders and may wish to implement a chargeback, cancel trials, refund products, or otherwise reverse transactions. As shown in the example embodiment, once a customer 601 has made this decision in step 6302, they may transmit a chargeback operation to customer credit card issuing bank 616 for processing in step 6304. Once processed, information can be sent to credit card network 614 for processing in step 6306. Once processed, information can be sent to merchant bank payment processors 612 for processing and resolution in step 6308. Likewise, customer 601 can transmit information regarding canceled trials or product refunds identified in step 6310 to call center 624 for processing in step 6312. Once processed, charge reversals or adjustment changes can be sent to merchant bank processors 612 for processing and resolution in step 6314.
[00131] FIG. 7 shows an example embodiment of a system architecture and process flow diagram 700. As shown in the example embodiment, a CAMP platform 702 can receive advertising impression data from user devices displaying advertisements to customers 701 via a network 704 at a router 706. Router 706 can receive the data and analyze it to determine the advertisement and traffic sources. This data can be sent to SITEprotect 724, where traffic sources 706-706 can pass the data through a SMARTfunnel 708 and a CAMP manager can determine different offers 712a-712x and send them to offer pages 714a-714x of CAMP sites 716. Once finished, offer data can be passed to CRM adapters 718, which can interact with external CRM 720, and transmit data including customer and order information to merchant CRM 722. SITEprotect 724 is also linked with Merchant CRM 722, fulfillment 726, call center 728, and charge backs processor 730. Charge backs processor 730 can send data to big data storage 732, which can be implemented with Redshift, Aurora, S3, and others. Likewise, big data storage 732 can receive and access third party data from third party data storage 734. Big data storage 732 can send data or be accessed by BI systems 736 operating metadata models, OLAP cubes, data marts, and others. Traffic quality, earnings per click, and other KPIs can be sent to affiliate portal 738 for accessing or transmitting data to affiliates 707 via network 704. BI systems 736 can also send campaign KPI, customer lifetime value, and other metric information to advertiser portal 740 for accessing or sending information to advertisers 705, via network 704. Advertiser and affiliate KPIs and other information can be sent from BI systems 736 to a system operator internal business dashboard 742 for accessing, manipulation, and analysis by administrative personnel 703, which can include group account managers, board of directors, system analysts, operations, and others.
[00132] Big data storage can also share data with big data and event processing engine 744, which can be implemented with SQS, Lambda, Redis, Kinesis, and others. Likewise, engine 744 can share data with data science and machine learning engine 746, which can operate a real-time marketplace and optimization functions. Data can be sent from engine 746 to multivariate testing or split testing engine 748, which can in turn send resulting data to router 706. Engine 746 can also send data to adaptive sales funnel and real time event processing engine 750, which can transmit resulting data to SMARTfunnel 708. Additionally, system operator internal business dashboard 742 can transmit configuration information and other information to SITEprotect 724.
[00133] FIG. 8 depicts an example embodiment diagram 800 of a real-time complex event and data processing architecture. As shown in the example embodiment, an internal producer 802 with a created account can transmit data generated on a user device to system based Kinesis streams 804, which can also receive data via third party integration web traffic 806, such as new sales data. Streams 804 can include data related to various consumers, which are then transferred as event streams to portal 808. Portal 808 can generate service requests 810, which are sent to Kinesis streams 804 and to a database 812, such as a Redshift database. Via web sockets, this information can also be shown in browser 816 for manipulation or use by users. Kinesis streams 804 can also transmit data to an event aggregator, such as Event Aggregation Lambda 818. Once aggregated, this data can be processed and sent via Firehose 820 for storage in database 812.
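The aggregation step performed by Event Aggregation Lambda 818 might be sketched as a handler that rolls individual stream events into per-offer counts before they are forwarded to warehouse storage; the event shapes below are illustrative assumptions (real Kinesis records would arrive base64-encoded inside a Lambda event envelope):

```javascript
// Hypothetical sketch of the aggregation step of FIG. 8: reduce a batch of
// decoded stream events into per-offer impression and sale counts, the kind
// of compact record that could then be sent on to the warehouse.
function aggregateEvents(records) {
  const byOffer = {};
  for (const record of records) {
    const { offerId, type } = record;
    byOffer[offerId] = byOffer[offerId] || { impressions: 0, sales: 0 };
    if (type === "impression") byOffer[offerId].impressions += 1;
    if (type === "sale") byOffer[offerId].sales += 1;
  }
  return byOffer;
}
```

Aggregating in the Lambda layer keeps per-event traffic out of the warehouse, so database 812 only stores the rolled-up counts.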
[00134] FIG. 9 depicts an example embodiment diagram 900 of an open API and micro services architecture. As shown in the example embodiment, an automation engine 904 can send data for use in external integrations architecture 902. Architecture 902 can include web applications, mobile applications, native applications, and others. Automation engine 904 can process automated deployment, elastic scaling, fault tolerances, and other information and can be fully automated in some embodiments.
[00135] Automation engine 904 can include a full stack JS 906, data information 908, real time event processing and reactivity 910, SQL-like analysis and data transformation 912, and others. Data information 908 can be stored in databases, such as Redshift, Dynamo databases, or others. Real time event processing and reactivity block 910 can process data from Kinesis streams or others.
[00136] Full stack JS 906 can further be broken down into portals block 914, micro services block 916, JSON and REST API block 918, NPM shared libraries block 920, and others. Portals block 914 can be broken down into administrative, registration, affiliate, and advertiser portals, which can be independent or run on a similar system in some embodiments. Micro services block 916 can include security, affiliate, advertiser, and other services.

[00137] FIG. 10 depicts an example embodiment diagram 1000 of an elastic container as a service architecture. As shown in the example embodiment, configuration, auditing, management, automation and batching, reporting, notifications, events, monitoring, security, and others can be various services 1002 provided by the system. Configuration can be handled by Docker, Elastic Beanstalk, PM configuration modules, and others. Auditing can be handled using Redshift, Firehose, and others. Management can be performed using an Amazon based CLI. Automation and batching can be performed using Cron, Circle CI, or others. Reporting can be performed using Tableau, Chartist JS, or others. Notifications can be performed using web sockets, Kinesis streams, or others. Events can include Kinesis streams, Firehose, or others. Monitoring can be performed using Amazon or other services. Security can be performed using Stormpath.
[00138] Also depicted are the presentation layer 1004, cache 1006, workflow 1008, service layer 1010, and cloud 1012. The presentation layer can be broken down into rendering, application state, and an ES 2015/Babel 6/Webpack combination. Rendering can be performed using React JS, while Application state can be processed using Redux. Cache 1006 can be maintained using Redis, workflow 1008 can be customized, and service layer operations can be performed using Swagger, Node, or others.
[00139] Cloud 1012 can be run on various platforms, such as Amazon's. This can perform distributed processing, operational data operations, warehousing, virtualization, and operating system operations, and include hardware, computational power, networking, and storage. Distributed processing can be performed using Amazon Elastic Beanstalk, Lambda, Node, or others. Operational data can be maintained on a Dynamo database. Warehousing can be performed using Redshift. Virtualization can be performed using Docker. An operating system for use could be CentOS, Linux, or others. Hardware and computation can include Amazon's EC2. Networking can be performed using Amazon's VPC, elastic load balancers, or others. Storage can include SSD or others.
[00140] FIG. 11 depicts an example embodiment diagram 1100 of an entity relationship diagram architecture or data model. As shown, Merchants 1102 and sub-merchants 1104 can communicate with advertising agencies or campaign managers (advertisers) 1106 and sub-advertisers 1108. These can run campaigns 1110 and process orders 1112 for customers 1114. Order items 1116 can be related to products 1118, which can be broken down by category 1120, currency conversions 1122, currency codes 1124, tax codes 1126, vertical and sub-vertical information 1128, and territory, sub-territory, and sub-sub-territory 1130. Customers 1114 can pay using credit card information 1132, including associated country, state, province, postal, and other information 1134.
[00141] Campaigns 1110 can be broken down into rounds 1140, and can have associated managers 1136, types 1138, caps 1142, and camp-affiliate caps 1144 for affiliates and sub-affiliates 1146. Affiliates and sub-affiliates 1146 can also have affiliate caps. Suppliers 1150, fulfillment centers 1152, and call centers 1154 can all access campaign information. Stratification tables 1156 and test tracks 1158 can also be maintained.
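A small part of the campaign/cap relationships above can be sketched in the platform's full-stack JS idiom. Field names and the cap-check logic below are illustrative assumptions, not the actual schema of the disclosure.

```javascript
// Illustrative in-memory sketch of the campaign, cap, and
// camp-affiliate cap relationships of FIG. 11; field names are
// assumptions for illustration only.
const campaign = {
  id: 1110,
  type: 'trial',
  cap: 1000,               // overall campaign cap
  rounds: [{ id: 1140, start: '2017-06-01' }],
  affiliateCaps: [         // per-affiliate (camp-affiliate) caps
    { affiliateId: 'aff-1', cap: 300 },
    { affiliateId: 'aff-2', cap: 200 },
  ],
};

// A sale from an affiliate is accepted only if both the overall
// campaign cap and that affiliate's own cap still have room.
function canAcceptSale(campaign, affiliateId, salesByAffiliate) {
  const total = Object.values(salesByAffiliate).reduce((a, b) => a + b, 0);
  if (total >= campaign.cap) return false;
  const entry = campaign.affiliateCaps.find(c => c.affiliateId === affiliateId);
  const sold = salesByAffiliate[affiliateId] || 0;
  return !entry || sold < entry.cap;
}
```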
[00142] FIG. 12 depicts an example embodiment diagram 1200 of an integrated full-service business model employing an aggregated multivariate testing system at a customer acquisition engine level. As shown in the example embodiment, a full-service business model 1202 can include a customer acquisition engine 1204, product inventory and fulfillment engine 1206, customer support engine 1208, and chargeback management engine 1210. This full-service offering can convert visitor traffic 1212 into customers and sales when received from advertising platforms 1214 and also provide fulfillment, customer support and chargeback services for associated merchants' entire campaigns.
[00143] FIG. 13 depicts an example embodiment of a CAMP management platform diagram 1300. The CAMP platform can automate core operational processes to power sales transactions. As shown in the example embodiment, the CAMP platform can provide real-time optimal data-driven matching between traffic, offers, and offer designs 1302 by leveraging assets and intelligence from the data warehouse 1304 to provide services for customers 1306, partner affiliates, and client merchants. In some embodiments, integral to the real-time matching effort, features in SITEprotect security 1308, split testing and traffic routing 1310, SMARTfunnels 1312, CAMPsites 1314, CRM adapters 1316, and others can be employed. These can be based on data warehouse information 1304, including business intelligence system information 1318, real-time marketplace optimization and machine learning engine information 1320, complex event processing 1322, big data storage 1324, and others. Complex event processing 1322 can be implemented with Lambda, SQS, Kinesis, ElastiCache-Redis, or others. Big data storage 1324 can be implemented with Redshift, S3, or others. The CAMP platform can also provide an internal system network management portal 1326, advertiser portal 1328, affiliate portal 1330, and others.

[00144] SITEprotect security 1308 is operable to monitor real-time traffic for fraudulent activity and other abnormalities, then perform prevention based on blacklists and grey-lists, along with other functions.
[00145] Split testing and traffic routing 1310 can provide Automatic Aggregated Multi-Variate Testing. This can include testing between traffic sources, affiliate performances, service level agreements, consumer demographics, markets, product types, offer page templates, page elements, and others, and can include multiple tracks beyond simple split testing (aggregated), multiple variables, and an automated decision-making protocol based on targeted goals and known constraints.
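The aggregated multi-track routing and automated decision step described above can be sketched as follows. This is a simplified illustration under assumed names and thresholds; a production system would add statistical significance checks before reallocating traffic.

```javascript
// Sketch of multi-track, multi-variable test routing: each track
// carries its own variable combination and a traffic weight, and an
// automated rule shifts weight toward better-converting tracks.
// Track names, counts, and the exploration floor are assumptions.
const tracks = [
  { id: 'A', template: 't1', weight: 0.5, clicks: 1000, sales: 30 },
  { id: 'B', template: 't2', weight: 0.5, clicks: 1000, sales: 50 },
];

// Deterministic routing: hash a click ID into [0, 1) and pick the
// track whose cumulative weight interval contains it.
function routeClick(tracks, clickId) {
  let h = 0;
  for (const ch of clickId) h = (h * 31 + ch.charCodeAt(0)) % 1000;
  const r = h / 1000;
  let cum = 0;
  for (const t of tracks) {
    cum += t.weight;
    if (r < cum) return t.id;
  }
  return tracks[tracks.length - 1].id;
}

// Automated decision step: give most of the weight to the track with
// the highest conversion rate, keeping a minimum exploration floor
// on every other track.
function rebalance(tracks, floor = 0.1) {
  const rate = t => (t.clicks ? t.sales / t.clicks : 0);
  const best = tracks.reduce((a, b) => (rate(b) > rate(a) ? b : a));
  for (const t of tracks) {
    t.weight = t === best ? 1 - floor * (tracks.length - 1) : floor;
  }
  return tracks;
}
```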
[00146] SMARTfunnels 1312 can provide optimized allocation of campaign capacities and adjust traffic volume as desired or needed to maximize fill rates for campaigns while minimizing any sales losses. It can provide the ability to group and sequence similar products or offers in certain market segments. It can also facilitate smooth traffic flow between equivalent offers or products, for instance as a previous offer runs out of sales inventory (CAP).
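The overflow behavior just described, where traffic flows to the next equivalent offer once the previous one fills its CAP, can be sketched minimally as below. Offer names and cap figures are illustrative assumptions.

```javascript
// Sketch of SMARTfunnel-style sequencing: equivalent offers in a
// market segment are ordered, and traffic is routed to the first
// offer with remaining sales inventory (CAP). Names and caps are
// illustrative assumptions.
const funnel = [
  { offer: 'offer-1', cap: 100, sold: 100 },  // sold out
  { offer: 'offer-2', cap: 200, sold: 150 },
  { offer: 'offer-3', cap: 300, sold: 0 },
];

function nextOpenOffer(funnel) {
  const open = funnel.find(o => o.sold < o.cap);
  return open ? open.offer : null;  // null: the whole funnel is filled
}

// Aggregate fill rate across the grouped funnel, for monitoring.
function fillRate(funnel) {
  const cap = funnel.reduce((s, o) => s + o.cap, 0);
  const sold = funnel.reduce((s, o) => s + o.sold, 0);
  return sold / cap;
}
```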
[00147] CAMPsites 1314 can allow accelerated front-end design, optimization, and deployment for advertisers, shortening launch cycles from weeks to days.
[00148] FIG. 14A shows example embodiment diagrams 1400, 1402, 1406, and 1408 of how advertising media can be displayed on user interfaces via channels such as email, native displays, and social media on various user devices. Here, consumers may select or click on advertisements, arrive at a system offer landing page, and be taken through a sales funnel, built with best practices, that is designed to maximize conversion, average order value, and customer lifetime value.
[00149] FIG. 14B shows an example embodiment user interface diagram 1410 of an initial advertisement offer page. Here, product information such as price, markdowns, reviews, taglines, images, quantities, and others can be displayed. Users can choose a select button 1412 to be taken to a secondary page, as shown in FIG. 14C.
[00150] FIG. 14C shows an example embodiment user interface diagram 1414 of a secondary page. As shown in the example embodiment, additional products can be offered, along with product information, in order to grow the potential customer's order. Users can select an add to cart button 1416 for additional items or pass through to a next screen with a decline additional offers button 1418.

[00151] FIG. 14D shows an example embodiment user interface diagram 1420 of a shopping cart review page. Here, users can view their virtual shopping cart, including product information, prices, totals, and other offers such as shipping and handling information. Secondary offers can be provided as well for additional products. Users can adjust quantities by selecting buttons 1422 or continue to checkout by selecting button 1424.
[00152] FIG. 14E shows an example embodiment user interface diagram 1426 of a shipping information page. Here, users can enter personal information in various fields 1428, select options with buttons 1430, and continue by selecting button 1432.
[00153] FIG. 15 shows an example embodiment user interface diagram 1500 of an Operational Dashboard. As shown in the example embodiment, some or all Key Performance Indicators (KPI) are tracked and displayed for user review along with trending indicators. These include names of campaigns, CAP information, average conversion rates, earnings per click average, average take rate, revenue information, gross profit, top offers, top affiliates, missing stars, conversion rates, network sales volume, and various others, and may be done with respect to different timing periods, prices, quantities, tables, graphs, charts, and others.
[00154] FIG. 16A shows an example embodiment user interface diagram 1600 of a visual campaign performance at-a-glance, heat-zone report. As shown, indicators may have different sizes that indicate sales volume, and colors can be used to indicate conversion performance ranges for a listing of different campaigns on a periodic basis. This can be daily, weekly, hourly, monthly, yearly, or otherwise, as appropriate. Hovering over each heat-zone dot will reveal a more detailed popup of campaign name, merchant name, and average KPIs, including conversion, take rate, earnings per click, and others, for a user- or system-specified time duration.
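The size and color encoding of the heat-zone dots can be sketched as a simple mapping. The scale factors and conversion-rate thresholds below are illustrative assumptions, not values given by the disclosure.

```javascript
// Sketch of the heat-zone encoding: dot radius scales with sales
// volume; color buckets reflect conversion performance ranges.
// Scale factors and thresholds are illustrative assumptions.
function dotRadius(salesVolume, minR = 4, maxR = 40) {
  // Square-root scaling so visual size grows sub-linearly with volume.
  const r = minR + Math.sqrt(salesVolume);
  return Math.min(r, maxR);
}

function conversionColor(rate) {
  if (rate >= 0.05) return 'green';   // strong
  if (rate >= 0.02) return 'yellow';  // acceptable
  return 'red';                       // underperforming
}
```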
[00155] FIG. 16B shows an example embodiment user interface diagram 1610 of a traffic dashboard. As shown in the example embodiment, live traffic information area 1612 can display information that allows monitoring of real-time traffic volume and sales performance for different individual advertisers or groups on a time-period basis, for example on a rolling twelve-hour cycle, hourly, or otherwise. This can be broken down into campaign levels and use different sizes and colors for easy visual recognition on the user interface. Broken traffic area 1614 can display information related to monitoring potential dead links. This can indicate links with no sales after a certain adjustable sample size has been reached during the last 5 minutes, 15 minutes, hour, or otherwise. Affiliate performance area 1616 displays real-time overall affiliate performance, including information related to sales for individual affiliates such as dates, sales amounts, and target levels or thresholds. When automatically compared with actual sales, these can trigger notifications that are displayed on the user interface screen. In various embodiments, different filters can be adjusted by selecting filter adjustments 1618, such as sliders, buttons, and others. As such, various thresholds can change the data being displayed in the different informational areas 1612, 1614, 1616. Various user interface boxes showing additional related information can be displayed when different selectable areas are selected by users.
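The broken-traffic check just described can be sketched as below: a link is flagged once it has accumulated the adjustable sample size of clicks inside the trailing window with zero sales. The event record shape is an illustrative assumption.

```javascript
// Sketch of dead-link detection for the broken traffic area: flag
// any link with at least `sampleSize` clicks and zero sales within
// the trailing time window. Data shapes are assumptions.
function findDeadLinks(events, now, windowMs, sampleSize) {
  const stats = {};
  for (const e of events) {
    if (now - e.ts > windowMs) continue;  // outside the trailing window
    const s = (stats[e.link] = stats[e.link] || { clicks: 0, sales: 0 });
    if (e.type === 'click') s.clicks += 1;
    if (e.type === 'sale') s.sales += 1;
  }
  return Object.keys(stats).filter(
    link => stats[link].clicks >= sampleSize && stats[link].sales === 0
  );
}
```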
[00156] FIG. 16C shows an example embodiment user interface diagram 1630 of an affiliate manager dashboard. As shown in the example embodiment, various KPIs can be displayed with hourly and multiday trending indicators based on monitored data. Examples of KPIs that can be displayed include blended earnings per click across multiple offers or products, conversion rates, take rates between products or offers, sales volume information, and others. Listings of affiliate IDs, names, types of campaigns, managers, and other information are also shown. Numerical indicators such as quantities, currency amounts, and percentages for KPIs can be based on various time periods, such as last week, lifetime, previous thirty days, yesterday, today, hourly, and others. Arrows, colors, and other indicators can provide simple visual cues for users. It should be understood that sortable columns, arrangements, types of data displayed, scrolling, and various buttons can also be employed to form a rich data immersion experience for users via the user interface.
[00157] FIG. 16D shows an example embodiment user interface diagram 1650 of a retention report. This can also be considered a detailed affiliate or traffic performance report. As shown in the example embodiment, this can be done for trial offers, for example on a free trial for a specified number of days before a monthly charge and shipment begins. This can include early traffic quality indicators, such as credit card denials, pre-paid card use, cancellations, and others. Date ranges can be modified via selection or by entering information in fields. Country, vertical, channel, campaign, affiliate, sub-affiliate, advertiser, and other selectable dropdown menus provide easy and intuitive controls. Percentages of orders, initial approval rates, net approved orders, fraud percentages, active net approved percentages, rebill percentages, and other information can all be monitored on a day-by-day or other periodic basis. This report enables all actors associated with the campaign to monitor and take appropriate actions depending on how the traffic quality is trending several days into the trial period, without waiting until the end of the trial period, which can be as long as 14 days, 30 days, or otherwise. This report adds the ability to preemptively throttle down bad traffic and scale up good traffic according to the targeted benchmarks or service level agreements. We know of no other affiliate/CPA network that has this visibility and capability today.
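The early traffic-quality indicators mentioned above can be computed from a day's orders as in this sketch. The order field names are illustrative assumptions, not the disclosure's actual schema.

```javascript
// Sketch of early traffic-quality indicators for a trial campaign:
// initial approval rate, prepaid-card share, and cancellation rate
// over a batch of orders. Field names are illustrative assumptions.
function earlyQuality(orders) {
  const n = orders.length;
  const approved = orders.filter(o => o.approved).length;
  const prepaid = orders.filter(o => o.prepaidCard).length;
  const cancelled = orders.filter(o => o.cancelled).length;
  return {
    initialApprovalRate: n ? approved / n : 0,
    prepaidRate: n ? prepaid / n : 0,
    cancelRate: n ? cancelled / n : 0,
  };
}
```

Computed daily, these rates would let a campaign manager spot deteriorating traffic several days into a trial period rather than at its end.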
[00158] FIG. 17 shows an example embodiment user interface diagram 1700 of an affiliate dashboard on a mobile user device platform. As shown, aggregated real-time, up-to-the-minute performance reporting can be provided. This can include day-over-day hourly trending analysis, also known as day-parting. Targeted earning benchmarks can be set for or by the affiliate, and the system can skip, manually or automatically, underperforming offers or designs in favor of better performing offers or designs. Offer caps, personal caps, offer pairing, total revenue, product-specific (for example, step 1, step 2, or other follow-ons) and blended earnings per click (EPC), step 1 conversion, total clicks, total sales, and take rate between products information can all be shown, and color coding can be used to highlight or indicate real-time and day-parted trends and velocity.
[00159] FIG. 18 shows an example embodiment user interface diagram 1800 of an affiliate dashboard on a user device, such as a desktop computer. As shown, aggregated real-time, up-to-the-minute performance reporting can be provided. This can include day-over-day hourly trending analysis, also known as day-parting. Targeted earning benchmarks can be set for or by the affiliate, and the system can skip underperforming offers or designs in favor of better performing offers or designs. Offer caps, personal caps, offer pairing, total revenue, step 1 EPC, total clicks, step 1 CR, blended EPC, total sales, and take rate information can all be shown, and color coding can be used to highlight or indicate particular information. Also, users can interact with various buttons to like or love offers, request CAP information, skip screens, select offer types, and also view quantities and loyalty or gamified points. Affiliates can also clearly see offer flows.
[00160] FIG. 19 shows an example embodiment user interface diagram 1900 of an affiliate SubID report. As shown in the example embodiment, if an affiliate partner sends one or more SubID codes to the CAMP platform along with their traffic (also known as clicks), the system administrator or system engine can track and report performance at this granular SubID level, which can indicate how particular media sources, content versions, designs, pitch angles, behavioral economic techniques, and other information are trending. This can be an automated process in some embodiments, which can include manual overrides and interrupts to customize the data. The report can also be operable to report whether certain traffic is meeting target levels of service that are expected by the system network or its clients, such as various merchants. As shown in the example embodiment, total revenue, overall blended EPC, overall step 1 conversion rate, total sales, total clicks, overall take rate percentage, and other information can be shown. Users can select offer funnels, personal performance, sub IDs, top offers, and other tabs to view. For the sub ID tab, users can select an offer pair, change timing intervals, device types, and verticals, as well as download reports. Identifiers, sales numbers, clicks, step 1 conversion percentage, blended EPC, take rate percentage, and other information can be displayed for each offer pair, set, or group.
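Per-SubID KPI aggregation of the kind shown on this report can be sketched as below: clicks, sales, and revenue are grouped by SubID and the derived KPIs (conversion rate, blended EPC) are computed per group. The record shape and SubID values are illustrative assumptions.

```javascript
// Sketch of SubID-level reporting: group traffic records by SubID
// and derive the KPIs displayed on the report (conversion rate and
// blended earnings per click). Record shapes are assumptions.
function subIdReport(records) {
  const bySub = {};
  for (const r of records) {
    const s = (bySub[r.subId] = bySub[r.subId] || { clicks: 0, sales: 0, revenue: 0 });
    s.clicks += r.clicks;
    s.sales += r.sales;
    s.revenue += r.revenue;
  }
  return Object.entries(bySub).map(([subId, s]) => ({
    subId,
    conversionRate: s.clicks ? s.sales / s.clicks : 0,
    epc: s.clicks ? s.revenue / s.clicks : 0,  // blended earnings per click
  }));
}
```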
[00161] As used herein and in the appended claims, the singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise.
[00162] The publications discussed herein are provided solely for their disclosure prior to the filing date of the present application. Nothing herein is to be construed as an admission that the present disclosure is not entitled to antedate such publication by virtue of prior disclosure. Further, the dates of publication provided may be different from the actual publication dates which may need to be independently confirmed. Additionally, all publications discussed herein are hereby incorporated by reference in their entirety.
[00163] It should be noted that all features, elements, components, functions, and steps described with respect to any embodiment provided herein are intended to be freely combinable and substitutable with those from any other embodiment. If a certain feature, element, component, function, or step is described with respect to only one embodiment, then it should be understood that that feature, element, component, function, or step can be used with every other embodiment described herein unless explicitly stated otherwise. This paragraph therefore serves as antecedent basis and written support for the introduction of claims, at any time, that combine features, elements, components, functions, and steps from different embodiments, or that substitute features, elements, components, functions, and steps from one embodiment with those of another, even if the following description does not explicitly state, in a particular instance, that such combinations or substitutions are possible. It is explicitly acknowledged that express recitation of every possible combination and substitution is overly burdensome, especially given that the permissibility of each and every such combination and substitution will be readily recognized by those of ordinary skill in the art.

[00164] In many instances entities are described herein as being coupled to other entities. It should be understood that the terms "coupled" and "connected" (or any of their forms) are used interchangeably herein and, in both cases, are generic to the direct coupling of two entities (without any non-negligible (e.g., parasitic) intervening entities) and the indirect coupling of two entities (with one or more non-negligible intervening entities).
Where entities are shown as being directly coupled together, or described as coupled together without description of any intervening entity, it should be understood that those entities can be indirectly coupled together as well unless the context clearly dictates otherwise.
[00165] While the embodiments are susceptible to various modifications and alternative forms, specific examples thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that these embodiments are not to be limited to the particular form disclosed, but to the contrary, these embodiments are to cover all modifications, equivalents, and alternatives falling within the spirit of the disclosure. Furthermore, any features, functions, steps, or elements of the embodiments may be recited in or added to the claims, as well as negative limitations that define the inventive scope of the claims by features, functions, steps, or elements that are not within that scope.

Claims

1. A computer based system for performing automated, aggregated multivariate testing via a computer network, comprising: a network connected user affiliate device; and a network connected server based testing platform, comprising: a processor; and non-transitory memory, storing instructions that, when executed by the processor, cause the processor to: set at least one goal for an automated, aggregated multivariate advertising experiment, as defined by the user and received from the user affiliate device via the network; identify at least one measurement or barrier for the test, based on at least one best testing metric stored in a non-transitory best practices database; construct at least one testing hypothesis; prioritize the goals, hypotheses, or both; design, execute, and measure a result of the experiment based on the prioritization; determine a winner based on the at least one best testing metric; update the best practices database; and transmit the winner for display on the user affiliate device.
2. The computer based system of claim 1, wherein designing the experiment further comprises: selecting at least one variable for monitoring.
3. The computer based system of claim 2, wherein the at least one variable is selected from: an elements variable and a scheduling variable.
4. The computer based system of claim 3, wherein the elements variables are selected from: email elements, analysis components elements, and landing pages elements.
5. The computer based system of claim 3, wherein the scheduling variable is selected from: number of conversions, number of variations, confidence level, amount of traffic, and lift percentage.
6. The computer based system of claim 1, wherein the server further comprises: a system management portal, accessible via the network by a system management device, that is operable to transmit system-monitored real-time information for display at a user interface of the system management device in a system manager dashboard.
7. The computer based system of claim 1, wherein the best practices database comprises: taxonomic organization of experiment data based on at least one of: traffic information, template information, optimization variables, and consumer demographics.
8. The computer based system of claim 7, wherein the optimization variables further comprise at least one of: lead and presale variables, radical redesign variables, methods of influence variables, user interface variables, relational equity variables, and sub-section variables.
9. The computer based system of claim 1, wherein the processor of the testing platform is further operable to:
access third-party data from a third-party information database via the network; and use the data in the design and execution of the experiment.
PCT/US2017/039161 2016-06-24 2017-06-24 Automated aggregated multivariate testing systems, methods, and processes WO2017223547A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201662354415P 2016-06-24 2016-06-24
US62/354,415 2016-06-24

Publications (1)

Publication Number Publication Date
WO2017223547A1 true WO2017223547A1 (en) 2017-12-28

Family

ID=60784005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/039161 WO2017223547A1 (en) 2016-06-24 2017-06-24 Automated aggregated multivariate testing systems, methods, and processes

Country Status (1)

Country Link
WO (1) WO2017223547A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220129920A1 (en) * 2020-10-28 2022-04-28 Shopify Inc. Methods and Apparatus for Maintaining and/or Updating One or More Item Taxonomies

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050289005A1 (en) * 2004-05-18 2005-12-29 Ferber John B Systems and methods of achieving optimal advertising
US20090063213A1 (en) * 2007-08-30 2009-03-05 Jay William Benayon Generalized parametric optimization architecture and framework
US20120004980A1 (en) * 2010-07-02 2012-01-05 Yahoo! Inc. Inventory management and serving with bucket testing in guaranteed delivery of online advertising
WO2015103698A1 (en) * 2014-01-11 2015-07-16 Vantage Analytics Inc. System, method and/or computer readable media for data ingestion, cleansing, transformation, visualization, insight generation, recommendation engine-driven actions, advertising deployment and optimization
US20150213389A1 (en) * 2014-01-29 2015-07-30 Adobe Systems Incorporated Determining and analyzing key performance indicators
US20160034972A1 (en) * 2008-07-15 2016-02-04 Ross Koningstein Generating and using ad serving guarantees in an online advertising network




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17816364

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17816364

Country of ref document: EP

Kind code of ref document: A1