EP2561473A2 - Method and apparatus for campaign and inventory optimization - Google Patents
Method and apparatus for campaign and inventory optimization
- Publication number
- EP2561473A2 (application EP11772532A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- campaign
- data
- campaigns
- cube
- dimensions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
Definitions
- Patent is related to U.S. Patent Application No. entitled "Method and Apparatus for
- the present invention pertains to advertising. More particularly, the present invention relates to a method and apparatus for campaign and inventory optimization.
- Advertisers place an advertisement (ad) or advertisements (ads) to attract users. If these ads are not acted on by the user then they may represent a waste of money and/or resources. This presents a problem.
- Figure 1 illustrates a network environment in which the method and apparatus of the invention may be controlled;
- Figure 2 is a block diagram of a computer system which some embodiments of the invention may employ parts of;
- Figures 3 - 63 illustrate various embodiments of the present invention.
- optimization is separated into creative and inventory optimization.
- the present invention is directed toward the inventory optimization issue.
- inventory optimization is competition between generations of rule sets.
- inventory optimization takes into account countries.
- inventory optimization takes into account rule sets that run recursively.
- different countries' inventory, and different slices of inventory, are taken into account.
- rule sets may be specified using non-technical terms (e.g. sex (gender), age, income, time of day, etc.).
- predicted revenue per thousand impressions may be modeled as a "Cube" having vectors.
- the pRPM vectors may be time and data (e.g. hours, days, weeks, country, ad slot position, frequency, gender, etc.).
- inventory optimization involves minimizing money and/or time spent on learning rules.
- inventory optimization involves maximizing sales by showing the right inventory at the right time (e.g. via learned rules).
- risk management is key to success. For example, if an ad network buys and sells at CPM, there is little risk and their value-add is the sales force. Buying at CPM and selling at CPA or Rev Share entails greater risk/reward, and the value-add is the technology required to optimize and control risk. Profiting from risk requires both Optimization and Stringent Risk Controls.
- optimization is based on HIGH VELOCITY COMPETITION BETWEEN SUCCESSIVE GENERATIONS OF f[x], where the functions (f[x]) optimized cut across various planes, for example, Creative/Content Optimization, Inventory Optimization, Product Optimization, and Offer Optimization.
- Inventory learning takes place in only cheap "representative pockets", for example, say the 4th-10th frequency only in the Midwest and only for Publishers X, Y and Z who represent average inventory for 3 different types (say games, quizzes and news). If learning is positive, then we scale to more data points before promoting to the scaled optimizer (e.g. learned rules).
- post sales opportunities are combined across different times to create vectors (e.g. the ROI report) that give us user values that we underwrite to for certain inventory slices. This goes back into the ad-server optimizer as pRPM calculations.
- FIG. 3-63 illustrate embodiments of the invention.
- a first step is to get actual data about visitors, what they saw, how much money was made, etc. Take all this data and load it into a data warehouse (DWH) where the data warehouse is structured such that dimensions represent a cube.
- DWH data warehouse
- a star-schema may be used. That is, each thing being measured represents a dimension. For example, but not limited to, visitors as male or female represent a dimension, age may be another dimension, time of day another dimension, the country the visitor is in, etc.
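To make the star-schema idea concrete, here is a minimal sketch in Python. The dimension and metric names are illustrative assumptions, not the patent's actual schema: each fact row holds the metrics, keyed by its dimension values, and the cube is simply the collection of such cells.

```python
from dataclasses import dataclass

# Hypothetical dimension/metric names, chosen for illustration only.
@dataclass(frozen=True)
class Dimensions:
    gender: str       # e.g. "male", "female", "unknown"
    age_band: str     # e.g. "18-24"
    hour_of_day: int  # 0-23
    country: str      # e.g. "US"

@dataclass
class Facts:
    impressions: int = 0
    clicks: int = 0
    conversions: int = 0
    cost_usd: float = 0.0
    revenue_usd: float = 0.0

# The "cube": facts (metrics) indexed by their dimension values.
cube: dict[Dimensions, Facts] = {}
cube[Dimensions("female", "18-24", 14, "US")] = Facts(impressions=1_000_000, clicks=800)
```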
- step 1 is to get actual data into the DWH.
- Step 2 - is a high velocity campaign/inventory optimization where we are testing different rule sets that run the campaign and inventory. Rule sets are competing against each other.
- a rule set consists of multiple pieces of definitions. There are two that are very important: first, a vector representing the dimensions that we are going to use in a cube, and second, a test or formula that we will use to decide whether we believe a given cell or not.
- the rule set is going to contain multiple things, first is an enumeration of the vectors of the dimension we choose to use and the order of use. A shorthand for this may be, for example (as seen in some of the screen shots), a country, size, platform, publisher, size of slot, session depth, by time, etc. This is shorthand to specify a vector that says to first consider country, then size, then platform, then publisher, then size of slot, then session depth, all by time, for example 24 hours, and if for some reason we don't believe in it (which is the second important thing, i.e. the significance test), we drop a dimension and try again.
- a site e.g. website
- a publisher includes multiple sites, so it is reasonable to say we're most likely to believe data from a given slot, but if we don't have data from a given slot, that alternate slots within the site more or less behave similarly, or for a publisher all sites of a publisher behave similarly, or all sites for a publisher on a given platform behave similarly, or all publishers of a given size behave similarly for a specific country. That is, the disclosed technique of dropping dimensions is used to get to believable data, as sketched below.
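A minimal sketch of that fallback, assuming a cube keyed by (level, key) tuples and with a crude impression-count check standing in for the real believability test:

```python
# Walk from the most specific cell to progressively broader ones until the
# data passes a (deliberately crude) believability test.
def lookup_believable(cube, slot, site, publisher, platform, min_impressions=1000):
    candidates = [
        ("slot", slot),                                # data from the given slot
        ("site", site),                                # alternate slots within the site
        ("publisher", publisher),                      # all sites of the publisher
        ("publisher+platform", (publisher, platform)), # publisher on this platform
    ]
    for level, key in candidates:
        cell = cube.get((level, key))
        if cell and cell["impressions"] >= min_impressions:
            return level, cell        # believable: stop dropping dimensions
    return None, None                 # nothing believable at any level
```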
- star schema is used.
- the star schema is composed of facts or metrics and dimensions (see for example,
- the dimensions are the facets of a cube and what is within a given cell are the facts or metrics that relate to the dimensions. For example, facts or metrics may be counts such as how many impressions, how many clicks, how many conversions, how many dollars we spent in cost, how many dollars we got in revenue, etc.
- the DWH represents a massive cube of events that actually happened and we want to get a smaller cube because we want to generate a predictive cube as fast as possible based on the historical massive cube. That is we want to manipulate the historical data to get a forward-looking statement.
- the dimension browser may not be included. That is, a prediction will not be made against browser. While in one approach a minimum number of dimensions in a predictive cube may be a goal, it is not the only approach. In another approach the goal is to get down to something that has large enough numbers for accurate predictions. Again the balance is between resources, such as computing resources, time deadlines, and funding. Because each impression costs actual real dollars this is not an academic exercise. While the "historical" impressions have already been paid for, if they do not tell us anything or yield a prediction that is significant, then we have to spend more actual dollars for impressions that will yield a significant prediction. Phrased another way: how do we go about making a prediction based on 45 million impressions rather than 1 trillion impressions?
- Step 1 the data warehouse (DWH)
- Step 2 - start creating the rules.
- the first step in creating the rules is to define a set of dimension vectors. We move from a dimension vector to another dimension vector as long as we believe we do not have significant data. That is, we go from one point in a vector to the next because we have failed a data significance test. Eventually we get to a dimension set where we believe the data. We then stop and, in one embodiment, retrofit the data into the cell in question. For example, suppose we have the simple situation where we have a publisher, a slot, 24 hours (worth of data), and one week (worth of data). This yields a 4 dimensional cube. So we could describe this as publisher by slot by 24 hours, and publisher by slot by 1 week. So inside this we need to place data (numbers), the most important being predicted RPM (revenue per thousand impressions) (pRPM).
- RPM revenue per thousand impressions
- publisher #7, slot #5, and predict $1.50: we may note as well in the cell that we dropped the slot dimension, note in the cell that no correction factor was applied, and note in the cell any other information we wish.
- the rules for picking a campaign may decide based on factors like believability and so may apportion a slot among various campaigns.
- rule sets are run side by side and we look at their RPMs. Based on this we run another iteration looking for a rule set to best the current winner. This is referred to as high velocity competition.
- the side by side running is done in real time on users via an A/B traffic split. For example, real traffic is taken and randomly split and the testing is done on each split.
- rule sets are competing against each other for a given campaign. At a "higher" level campaigns may also be competing against each other.
- a campaign optimizer has a set of predictions that allows it to pick the best performing campaign for a piece of inventory.
- rule set #7 has publisher x X x Y x Z where X, Y, Z are not gender.
- One looking at this might say "Hey I think rule set #7 is underperforming because it's not taking into account gender. I'm going to create a new rule set #8 that takes gender into account.”
- Now rule set #8 might be publisher x X x Y x Z x female gender, where X, Y, Z are not gender.
- if rule set #8 wins over rule set #7 then that was a good decision. If it loses then it was not a good decision (it might be that the gloves look neutral and thus appeal to all genders (male, female, unknown)).
- rule set contains much more, for example how to determine a winner and a loser.
- the things that compete with each other are campaign rule sets.
- the things that compete with each other are creative rule sets. That is, creatives. Creatives are considered first order things, whereas rule sets are second order derivatives. Recall it's not the campaigns that are directly competing against each other but rather the rule sets that are driving a campaign that are competing.
- a key part of a system is a universal placement engine that allows you to take any transaction (e.g. an advertising transaction) and model it as a whole series of decisions about what to do with traffic. And once you can model and measure it that way then you can begin to optimize each one.
- the vectors to use could be random. However, for a given approach one could say that while they are random, we believe that age is really important, so in this case we would allow the dropping of dimensions except for age.
- the vectors are enumerated from the most granular to the least granular. We can enumerate them using another rule set or we can use shorthand to enumerate them. So we first enumerate the possibilities that we want to consider. Generally the finite set of enumerations will be done by a person who is running the tests. For example a DMA (direct marketing associate) user named "Chris" may decide to run a test.
- DMA direct marketing associate
- Chris may say "I've looked at this cube, I've looked at the money it's making, I've looked at the details of decisions it's making and I think I want to consider gender." That is, the user has decided that gender should be considered. The techniques discussed put machinery in place to crank through the process of considering an idea, for example, such as gender. The idea may come from a user rather than as a random pick of one of the variables, dimensions, etc. available.
- the system is running along with a rule set that is the control set because it's winning but it does not consider age and the user believes that age is important and if taken into account we could make more money.
- a rule set that considers age is generated (with the attendant pRPM, believability, etc.). It is important to understand that the rule set that considers age is generated by the machine based on all the factors and techniques discussed; however, the pRPM is not looked at to determine whether to run the rule set or not. Rather, the results of the many pRPM calculations that may be done to consider age are used to select candidates to test, and it is these candidates that are then run against the currently running rule set in an A/B test. The winner is whichever generates more money in a real-time contest.
- a machine such as, but not limited to, a server must serve up 10,000 ads.
- the machine must decide which ads to serve up.
- the machine uses the pRPM as a basis for which ads to serve.
- the machine can be a cluster of machines. So the machines must decide when passing the first gate: what campaign, what advertiser, is most likely to make us the most money if we show it here.
- rule sets are also called algos or algorithms. Now for the A/B test, we may decide to split the traffic 80%/20% (denoted 80/20) with 80% of the traffic going to the control rule set which is the current winner. For example, in the consider-the-gender case, we might be really reluctant to use even a 50/50 split until we know that the actual revenue from gender is greater than the non-gender case. Thus, an 80/20 or even 90/10, 95/5, or more likely a 99/1 split may be desirable.
- Algos have a rule set which is comprised of dimension vectors, significance tests, significance thresholds, selection rules, etc.
- a traffic split decision may be made by a human using the universal placement server which takes the traffic and splits it based on rules. For example, a really simple rule would be to take all the traffic and do a 98.6/1.4 split, as sketched below.
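A minimal sketch of such a split rule, with the assumption (not stated in the text) that the visitor ID is hashed so each visitor's branch assignment stays sticky across requests:

```python
import hashlib

def split_traffic(visitor_id: str, weights: dict[str, float]) -> str:
    """Assign a visitor to a branch in proportion to the branch weights."""
    total = sum(weights.values())
    h = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16)
    point = (h % 100_000) / 100_000 * total   # stable point in [0, total)
    for branch, weight in weights.items():
        point -= weight
        if point < 0:
            return branch
    return next(iter(weights))                # guard against rounding

# The simple rule from the example: a 98.6/1.4 split.
branch = split_traffic("visitor-42", {"control": 98.6, "test": 1.4})
```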
- slots may be purchased in advance, advertisers and publishers may be secured and campaigns then designed based upon the further constraint that an advertiser is only willing to pay for conversions. Within these constraints is where we maximize our revenue. So for example if an advertiser is willing to pay $10 per conversion and we can generate that conversion for $7 we make $3 per conversion. However, this is not a very compelling approach. A more enticing approach is where the conversion is still worth $10 to an advertiser but we only charge them $5 for the conversion. Clearly it's a no-brainer to sign up for this approach as there's nothing to lose and everything to gain. So how do we make money? Simple, generate that conversion for $2 and we make $3.
- the yield curve is defined as the ratio between the thing that you get revenue for and the amount you pay out for the thing.
- CPM cost per thousand impressions
- CPC cost per click
- CPA cost per action/conversion
- the CPM yield curve by definition is 100%. That is if you're buying an impression and selling an impression the yield is 100%.
- a CPC yield curve might easily be in the range of 1 in 1000 to 1 in 10,000, or more or less, clearly much less than 1 in 1 (100%). And CPA can easily be one or more orders of magnitude less than CPC. So continuing the example, the yield curve is conversions divided by impressions (conversions/impressions) which could be, for example, 1/10000.
- the price is the price at any given time, since price may also vary. So for example if yesterday we were getting paid $1.00 for something and today we are getting paid $1.20, while the yield curve has not changed, the price today is 20% more attractive. So we take the current price and multiply it by the historical yield curve. Thus the pRPM can vary based on this. This is also a reason it is important to separate the yield curve from the price. Realize that the yield curve is not sensitive to the price, rather it is the responsiveness of the audience to that which is being promoted. Stated another way, the yield curve is the historical tendency of the audience or visitors to click. We are talking about a click yield, which in the industry is often referred to as CTR (click through rate).
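A worked sketch of that separation: the historical yield curve stays fixed while the current price moves, and pRPM is their product scaled to a thousand impressions.

```python
def predicted_rpm(current_price_per_event: float, yield_curve: float) -> float:
    """pRPM = current price x historical yield curve, per 1000 impressions."""
    return current_price_per_event * yield_curve * 1000

# A $1.00-per-conversion deal at a 1/10000 historical yield gives pRPM $0.10;
# at today's $1.20 price the same yield curve gives $0.12 (20% more attractive).
assert abs(predicted_rpm(1.00, 1 / 10_000) - 0.10) < 1e-9
assert abs(predicted_rpm(1.20, 1 / 10_000) - 0.12) < 1e-9
```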
- CTR click through rate
- a campaign that is promoting a muscle car in a man's online magazine or website may have a higher CTR than the same campaign in an online music magazine or website.
- Session depth is also known as frequency. Slot frequency is how many times a given visitor has looked at, or seen, or had presented a given slot. It is a measure of distractibility. For example, upon first visiting Yahoo's home page (as measured in say a 24 hour period) there may be a Medium Rectangle slot of 300x250 pixels and so this would be a session depth of one or a slot frequency of one. Now if you hit the refresh button this would be a session depth of 2 or a slot frequency of 2 for that Medium Rectangle (mrec).
- session depth is important because different ads can be placed in this mrec depending upon the session depth. For example, it is reasonable to assume that on your first visit to a new page it is more likely a user will look at an ad in a slot, than on the 2nd, 3rd, 4th, etc. visit to the same page in the same slot. That is the user is more likely to ignore the ad in the slot on repeated visits to that page.
- the distractibility versus slot frequency curve need not, and in fact, generally is not linear. If your distractibility at slot frequency 1 is normalized to 1, then at slot frequency of 5 it might be 0.7, and at slot frequency of 10 it might be 0.05. Nor does the distractibility curve need to be monotonic. It may well have several peaks and valleys. For example if the first slot frequency is going for $2.50, the 5th slot frequency might be $1.00, and the 10th slot frequency might be $0.10. Thus there is a wide variation, and therefore session depth is an extremely important dimension.
- the invention is not so limited, and in fact the mrec is an ad unit that may in fact be on different web pages.
- session depth can be a very important factor in a rule set. Therefore if session depth as a dimension is dropped it is very likely that we will need to apply a correction factor to the resulting calculations to try and compensate for the lack of this dimension in the rule set. Now this correction factor can be derived from a historical perspective across for example campaigns and then adjusted by another correction rule and then applied.
- the "historical correction rule” is just another rule set and is subject to the same testing for "believability" as any other factor. So for example, the historical correction rule might not be believable in which case the rule might be to discount it by a factor of two.
- the correction factor is in the range of 0.05 to 1.0.
- the cell should also contain a record of how it was calculated, how many dimensions were dropped, what was the time frame, etc. The idea is that we need
- the correction factor is defined infra, however, the entire correction factor is generally between 0 and 20.
- the user could look at the performance based on a time period for clues. For example if the accuracy of the prediction is 88% over a 24 hour period but drops to 77% over a 3 day period and to 70% over a 7 day period then the user knows the time period affects the accuracy. The user may try and see if some time segments in the 24 hour period are more accurate than others and use this to improve the bottom line. That is, let the rule sets compete: in this case, the control at say 24 hours against others that have a shorter time period.
- the objective is to populate each cell.
- We have our set of vectors. We start with the first vector, get a number, and run our significance test; it passes or it fails. If it passes we do the next vector. If it fails we move to the next point on the vector (e.g. reducing a dimension) and repeat the process until we have something significant. We do this for all the vectors and we have the cube built, as sketched below.
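A sketch of that loop, where `query_dwh`, `is_significant` and `correction_for` are assumed stand-ins for the rule set's real definitions:

```python
def build_predictive_cube(vectors, query_dwh, is_significant, correction_for):
    cube = {}
    for vector in vectors:              # each vector lists points, most granular first
        for point in vector:
            metrics = query_dwh(point)  # pull the numbers for this dimension set
            if is_significant(metrics):
                # Retrofit into the cell in question, recording how it was built.
                cube[vector[0]] = {
                    "pRPM": metrics["pRPM"] * correction_for(vector[0], point),
                    "from_point": point,
                }
                break                   # significant: stop dropping dimensions
    return cube
```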
- the algo in one embodiment of the invention is comprised of multiple parts: a) the vector, b) the significance rule, c) the secondary engine, d) etc.
- the secondary rule engine takes the predictions as inputs and outputs percentages.
- the secondary rule engine also consults the learning engine.
- the learning engine works by modeling the new campaign by looking at prior campaigns and applying a learning factor. For example, the learning engine would look at current campaigns 1 through 4 and say "I'm going to model the new campaign on 70% of the average of campaigns 1 through 4" (i.e. 0.7 x (campaign 1 + campaign 2 + campaign 3 + campaign 4) / 4 ). Thus the modeling in this case is looking at a basket of campaigns and subsidizing the new campaign based on the basket. Note that in one embodiment of the invention the basket of campaigns used for modeling the subsidy (the learning subsidy) is determined to be similar to the new campaign.
- the basket may contain other campaigns for clothes such as pants, shirts, belts, shoes, etc. but is very unlikely to contain campaigns for archery, motor oil, cars, power tools, pool covers, etc.
- the learning factor can be greater or less than one. That is it might be 0.5 or 2.0, etc.
- the basket serves an additional purpose - that of providing an idea where the new campaign should be placed. Again continuing with the sock example, it makes sense that where the shoe ads are being placed may be a more appropriate location for socks and more likely successful than the location for motor oil.
- each modeled basket campaign has its own learning factor weighting.
- the model for the new campaign might be 0.7 x campaign 1 + 0.45 x campaign 2 + 1.34 x campaign 3 + 0.17 x campaign 4. That is, a learning factor weight is given to each modeled campaign in the basket. In this way weights may account for believability, similarity, etc. For example, continuing with the socks example, a higher weight might be given to a campaign for shoes, because socks are used with shoes, than to a hat campaign. A sketch follows below.
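A sketch of that weighted basket model; the pRPM figures are made up for illustration, and the 70%-of-the-average form above is just the special case where every basket weight is 0.7/n:

```python
def learning_prpm(basket: list[tuple[float, float]]) -> float:
    """basket holds (learning_factor_weight, campaign_pRPM) pairs."""
    return sum(weight * prpm for weight, prpm in basket)

# Per-campaign weights, as in the example above:
model = learning_prpm([(0.7, 1.00), (0.45, 1.20), (1.34, 0.80), (0.17, 1.00)])

# The 70%-of-the-average form is the uniform-weight special case:
prpms = [1.00, 1.20, 0.80, 1.00]
subsidy = learning_prpm([(0.7 / len(prpms), p) for p in prpms])
```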
- the actual first step in the learning engine is to see if the campaign needs to be subsidized at all. That is, the optimizer might actually have a position on this issue, such as "I know about this campaign." So the learning engine has a rule that describes what it means to be learned. For example, if after a campaign run we find that zero dimensions are dropped then the campaign can be considered learned.
- Another possible learning control is to limit the learning to a time period. For example, stop learning after 24 hours, stop learning on April 01, 2011, etc.
- the learning engine checks to see that the campaign being modeled is enrolled in the learning engine, has not exceeded any learning limits, is based on a basket model, etc.
- the invention is not so limited.
- the learning engine could look at the rate of learning and if the campaign is being learned very rapidly it could decide based on the believability of this to cut off the learning early to conserve subsidies.
- the number of dropped dimensions could be the criterion for being learned. We have talked about no dropped dimensions being 100% learned, which is a simple example. However, the invention is not so limited and "learned" could also be something like only 10% of the dimensions have been dropped, or only 2 dimensions have been dropped, or dropped dimensions are being decreased at a believable rate to achieve 90% of the dimensions within the next 10 minutes, and so it can be considered learned.
- the learning being disclosed here is not the advertiser funded learning budget approach such as a CPM campaign where the advertiser pays to have a campaign run, after it is run, then gets the results and then possibly runs another campaign.
- a dollar limit or hard cost is what it costs for us to pay for impressions, etc. in order to learn. These are hard costs for example, for slots, etc. They are irrespective of what we place there and therefore fixed costs. They are always positive, meaning we are paying money. Opportunity costs are what we stand to lose or gain versus something else that could have been taking place instead of the learning. So for example, suppose we are running a campaign 43 which is netting us say $1 per impression. We now substitute a learning campaign into the slots, placements, etc.
- the cell has the information on the campaign and the associated information (learned, hit a limit, not learned, etc.), and the believability (believable, not believable), etc., and can now be used by the optimizer to compete. It may well be that the optimizer does not pick this new campaign, however that is up to the optimizer. What is to be appreciated is that the new campaign has been subsidized to a given level (learned, hit subsidy limit, etc.) to give it a chance to compete with other campaigns.
- the significance test can be as simple as noted above, where the example was "If CPA campaign and the conversions are less than 10 then it's not significant, unless impressions are greater than 100,000." Or the significance test can be a statistical test such as a two-tailed Z test, etc.
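The simple rule quoted above, as a one-function sketch (statistical tests such as the two-tailed Z test, sketched further below, can be swapped in instead):

```python
def simple_significance(conversions: int, impressions: int) -> bool:
    """Not significant if a CPA campaign has fewer than 10 conversions,
    unless impressions are greater than 100,000."""
    return conversions >= 10 or impressions > 100_000
```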
- each cube has time data associated with it, for example start time. That is, for example, cube #3 could start at 0300 and finish at 0700, cube #27 could start at 0230 and end at 0600, cube #4 could start at 0100 and end at 1200, cube #32 could start at 0900 and end at 1000, cube #99 starts at 0630 and ends at 0930, cube #104 starts at 0500 and ends at 1300, etc.
- a universal placement server is used, for among other things, serving up the A/B test.
- the universal placement server is a machine that allows you to take any traffic anywhere, split it by any kind of rule, and measure the results. This allows for optimization.
- the presentation of the actual creative can be optimized (i.e. creative optimization), as well as the offer (the offer is on a landing page and so landing page is understood to refer to the offer and vice versa) or landing page.
- landing page optimization is similar to creative optimization but we're applying optimization to landing pages rather than ads.
- the universal placement server lets you take any piece of traffic coming in and create as many optimization points as you like. So these are placements or placement tests that can be modeled.
- the placement can be modeled as comprised of a slot, which goes into a rotation, rules which have campaigns, which have locations of ads, which have an ad, which has a piece of content as an asset, which takes you to a landing page, and sells you a product. So we've taken one interaction and made a series of placements. Now we can describe how traffic flows from one placement to the others. We get to measure it and then we can answer the question: how did a slot do compared to a control on, for example, conversions? Or how did this campaign do against a target? Or how did this ad do against my target? Or how did this asset perform against my target, etc.? This allows us to try to optimize it.
- Figure 57 shows how traffic went into slot rotations. So you are describing how traffic flows, for example, under these conditions go 100% of the time here, under these conditions go here, under these go here, etc.
- Figure 56 also shows the universal placement server. Where for example, in this country send 95% of the traffic this way, 5% this way, and 0% this way.
- Figure 54 also shows the universal placement server, as does Figure 53, Figure 52, and Figure 51.
- the universal placement server knows about traffic, ads, results, publisher, slots, as well as landing pages, campaigns, assets, rotation of campaigns, etc.
- the universal placement server does deploying, and rotating, and tracking, and reporting, and can roll back for not only ads but anything else, both visible and not visible. For example whether that thing is an ad, or a headline within an ad, or a landing page, or a product bundle, or a trafficking rule set (which is not visible to the eye), in other words any asset.
- one may take, for example, 5 trafficking rule sets and attach them to placements in the universal placement server, deploy them, rotate them, report about them, etc., and then see, for example, that rule set 4 is producing more revenue than rule set 3.
- the universal placement server is able to track actions, etc., based on an ad tag being invoked by a browser.
- when an ad tag is invoked by a browser, things can happen that allow the universal placement server to take measurements, get results, etc.
- the universal placement server is capable of driving traffic through a website using open ended rules, and measuring the result: who looked at it, how the piece of content performed against your objective, etc.
- an ad rotation just renders an ad, whereas an ad renders a piece of content; the user interacts with it and goes on to a landing page. Then there may be a landing page rotation. So there are several pieces.
- the ability to rollback may be needed, for example, if a landing page is performing badly. We would simply rollback and try another landing page. Additionally, from the timeline of the transaction there is the ability to not only rollback but to also rollforward.
- the universal placement server allows for the gathering of information on which we can also perform optimizations.
- the system or machine may be considered to be comprised of multiple sequential gates. Each gate represents a decision that must be made. Each gate is sequential to the previous gates in time. A visitor may enter the machine at any gate, but entering through any other than the 1st gate requires that the appropriate decisions be made external to the machine. We can see the act of passing each gate as reducing a degree of freedom possible in interacting with this specific visitor for this specific transaction.
- Each "visitor” is our representation of a distinct human being who is potentially capable of becoming a customer for one or more of our advertisers. We interact with visitors in sessions, and in transactions. Each pass through Gate 1 starts a new transaction. Each session is started by standard browser mechanisms.
- VisitorDB virtual "Visitor Data Store”
- Patent Application Publication Number: US 2002/0174094 A1
- VisitorDB virtual Data Store
- the VisitorDB must return data within 100 milliseconds. Any longer and there will not be time to process the RTB request or the ad will be delayed in a monetarily significant way. To this end the immediate data stores are prioritized over the persistent store.
- the 3 data stores are:
- Cookies: We store information about this visitor in encrypted cookies in his browser. This forms a very efficient and highly distributed database, where each visitor brings along his own information about who he is, how many and what ads and campaigns he has seen before, what he has purchased or clicked on before, what targeting vectors exist to describe him and so on. While a highly efficient mechanism, cookies are not accessible in RTBs.
- Publisher provided data: Publishers sometimes provide data about the user. This often includes data not available to anyone else, such as user age and gender, user interests, and so on. When provided it is copied into the distributed visitor database (DVDB) for further use and cross referenced to the publisher ID for this user (each publisher has a separate system of assigning unique IDs to the user; we can simplify this to a vector of n unique 128-bit GUIDs, one for each publisher).
- the data is provided by the publisher explicitly (as parameters) and sometimes implicitly. If implicitly (i.e. the ad buy is parameterized only to a certain user demographic), the Demographics Data Enrichment Service translates the ad buy into standard user characteristics.
- DVDB Distributed DB
- the DVDB data store is the only one guaranteed to be persistent. However, as it is very hard to assure scalable performance at our scale (400+ million unique visitors) within the 100 millisecond timeout, it is supplemented by the other stores.
- the DVDB is implemented as a replicated multi-node NoSQL database similar to Apache Cassandra. The data is automatically replicated to multiple nodes for fault-tolerance, and each node is physically close to each ad server. Failed nodes are bypassed and requests proxied to live nodes, while the failures are repaired.
- a frequency counter cache: In parallel to the user/session/transaction retrieval from VisitorDB, a frequency counter cache (FCC) is established.
- FCC frequency counter cache
- this is critical information to have that is not provided by the largest exchanges such as the Google AdX. Because display advertising is all about distracting the user from what he is doing on the web site, the first impression is significantly more valuable than the second.
- the 10th impression may be worth only as much as 1/100th of the 1st.
- VisitorDB data is supplemented by a series of translation services. These translate one piece of visitor data into other pieces of data that are more actionable for targeting purposes. Translation services include:
- Geo Service: This translates the visitor's IP address into country, MSA (metro service area), city, state, ZIP and lat/long centroids.
- the service is also responsible for mapping specific ad buys where demographic data is available as part of the targeting criteria (e.g. Facebook direct) to permanent storage in the VisitorDB (including cookies).
- the visitor may belong to one or more "standing" targeting vectors, meaning that an advertiser would like to target or exclude the visitor specifically (remarketing based on prior behavior, or exclusion if already a customer)
- the trafficking engine provides a way to define rules that drive traffic.
- the rules can be applied left to right (literally defined how traffic flows) or right to left (by placement eligibility).
- the rule engine can implement manual learning and other exceptions.
- Eligible campaigns are compared against the list of temporarily ineligible campaigns broadcast by the campaign controller.
- the campaign controller is implemented as a series of independent nodes that maintain aggregate stats on the campaign level in near real time. They broadcast STOP requests to all ad servers via a message queue. They also broadcast PACING instructions. The reasons a campaign needs to stop are as follows
- Pacing Controller broadcasts pacing instructions. It measures the amount of budget currently spent against elapsed time (assuming a daily budget cap) and then gives out a percentage at which the ad-server can serve (1-100%) that campaign. It then sets a CPM floor against which pRPM is measured. The floor is calculated by setting a floor and looking at the historical performance. If the floor was not enough to spend the daily budget, then the next day the floor is incremented. This assures that the limited number of campaign impressions is spent only in those slots where the end result (the RPM) is highest. This ensures higher overall monetization for the network. Put plainly, the idea is that scarce campaign impressions are saved for those slots where we make the most money. A sketch follows below.
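A minimal sketch of that pacing logic. The exact throttle formula and the direction and size of the floor adjustment are assumptions; the text only says the serve percentage follows budget-versus-elapsed-time and that the floor is adjusted day over day.

```python
def pacing_percentage(spent: float, daily_budget: float, hours_elapsed: float) -> float:
    """Percentage (1-100) at which the ad server may serve this campaign."""
    expected = daily_budget * hours_elapsed / 24.0   # on-pace spend so far
    if expected <= 0:
        return 100.0
    ratio = expected / max(spent, 1e-9)              # < 1 means ahead of pace
    return max(1.0, min(100.0, 100.0 * ratio))

def next_day_floor(floor_cpm: float, spent: float, daily_budget: float,
                   step: float = 0.05) -> float:
    """Raise the CPM floor if the budget was exhausted (save impressions for
    richer slots); relax it if the floor kept the budget from being spent."""
    return floor_cpm + step if spent >= daily_budget else max(0.0, floor_cpm - step)
```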
- a campaign is deemed not-eligible if it does not have any eligible creatives (as a single campaign can have creatives serving multiple sizes or publisher requirements).
- Creatives each have their own eligibility rules (by size, by what the publisher allows (e.g. animation, content, sounds, etc.)). These rules need to be checked before a campaign is selected; otherwise it is possible to select a campaign that has no eligible creatives (forcing backtracking, which is not efficient). Eventually eligible traffic is sent to one or more (typically more than one, based on rules or random weights) placements representing different configurations of the
- Each configuration has its own data cube, and its own set of auction rules. These placements compete with each other over time. Those with higher RPMs are promoted, the others discarded.
- Each campaign placement (or another optimizer placement) has an entry in the cube for each cell (where the cell is defined based on the data above).
- the job of the optimizer is to pick the placement predicted to perform the best (within a given range and confidence interval). It does so by looking at a two-tailed distribution between each pair of campaigns:
- the random variable Z is dependent on the parameter mu to be estimated, but with a standard normal distribution independent of the parameter mu
- estSE = sqrt( s_1^2/N_1 + s_2^2/N_2 )
- estSE = sqrt( s^2/N_1 + s^2/N_2 ), using the pooled variance
- s^2 = ( (N_1 - 1)*s_1^2 + (N_2 - 1)*s_2^2 ) / ( (N_1 - 1) + (N_2 - 1) )
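A sketch implementing those standard-error formulas for comparing two campaigns' RPM samples; under the two-tailed test the confidence that the yields differ is 2*Phi(|Z|) - 1.

```python
from math import sqrt, erf

def two_sample_z(mean1, s1, n1, mean2, s2, n2, pooled=False):
    """Return (Z, probability that the two yields differ)."""
    if pooled:
        s2_pool = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / ((n1 - 1) + (n2 - 1))
        est_se = sqrt(s2_pool / n1 + s2_pool / n2)
    else:
        est_se = sqrt(s1**2 / n1 + s2**2 / n2)
    z = (mean1 - mean2) / est_se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))   # standard normal CDF at |z|
    return z, 2 * phi - 1
```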
- control is passed to the learning engine.
- learning engine sees if a substitution to the winning campaign is necessary.
- the substitution is based on the need to learn to see how new campaigns will perform.
- a new campaign is mapped to a weight adjusted bucket of existing campaigns. It can serve instead of the winning campaign based on the weights assigned until learning is turned off.
- Learning is defined OFF if the opportunity cost for this campaign is exceeded, or its learning impression budget is exceeded, or it is in fact learned at the given cell (i.e. it was considered by the optimizer and either selected as a winner or discarded on its own merits).
- the winning campaign is selected as the outcome of Gate 1.
- a bid CPM associated with this campaign is retrieved from a cube representing the bid-dimensions. In the RTBx case the bid CPM is transmitted to the exchange. Skip to Gate 2, but logically the next step is: the content of the transaction is written out to the Measurement Service. The transaction is represented in 5 parts:
- Bid Request
- Bid Response CPM
- Bid Won Price, in case of 2nd price auction
- Impression cost
- Impression Load (time to load; goes into the transaction definition as Ad Load Time)
- Gate 1: there is a real-time transition between Gate 1 and Gate 2 inside the same machine.
- a specific campaign e.g. in Advertising.com, Yahoo Yield Manager or MSN.
- the 3rd party's ad server sells us a campaign but the creative is defined as our ad call tag.
- campaign promotes a specific product at a specific price/terms/targeting combination.
- Creatives promote that product irrespective of the campaign specifics.
- creatives are organized by product, user language and size ("size” is really a description of physical attributes, so movies can be represented as having size "movie-30 seconds” and banners can be represented as "728x90" - the key is that the size of the creative match what is accepted by the slot).
- a single rotation contains creatives (at the same level of product/language/size) that can compete against each other. Creatives are marked according to whether they are testing a new concept (concept) or a new variation on an existing concept (variant).
- ads are run randomly (with weights provided by the system or the marketing analyst) for each tier in the ad rotation frequency.
- test-reason associated with it (e.g. "testing headline FREE versus COMPLIMENTARY"). The reason associated with the winning variant is recorded. The marketing analyst is then prompted to perform this test on rotations where this particular reason has not been tried yet. 2.3.3 Concepts are archived for future retesting. A small percentage of traffic (analyst controlled) is allocated to retest older concepts.
- Ads are checked for eligibility. Ineligible ads are discarded. If no ads are eligible, the campaign in question is also not eligible.
- the system is provided with a set of potential cube-dimensions in which the ads may
- one set of dimensions involves the nature of the inventory (slot/site/publisher) and the other involves how distractible the user is (rotation-frequency or slot-frequency).
- the system tests if the winning ad behaves the same in each of the cube dimensions. If different cells in the cube have different winners, the system will repeat 2.3.1 for each SET OF CELLS (rather than for the system overall).
- a landing page follows immediately after the click on the ad.
- the LP in fact represents an entire series of pages (or user experiences) that is presented to the user. There is a user initiated transition from the ad to LP1 and from LP1 to any subsequent discrete user experience (LP2, LP3, ..., LPn).
- the process of selecting the LP(x) is recursive, so when we refer to the gate as selecting the LP we really mean a recursive selection of LP1, LP2, ..., LPn.
- selecting the LP may use the same approach as ad selection.
- the LP exit may be optimized and cross sell and up sell opportunities are presented for further conversion possibilities.
- selecting the LP exit and cross sell and up sell may use the same approach as a campaign selection.
- selection of the product is possible as well as order configuration.
- selecting the product and order configuration may use the same approach as a campaign selection.
- the email is sent to subscribers based on rule sets for optimum follow up, etc. (e.g. not a time of month when rent is due).
- selecting who to email and when may use the same approach as ad selection.
- the selection of payment options is presented.
- selecting the payment option may use the same approach as a campaign selection.
- the rule-set: define a rule-set that you intend to test.
- the rule-set starts with (a) a vector of dimensions and (b) a significance test for deciding whether a given cell has data you intend to use/believe or not.
- the dimension vector can be expressed using a shorthand notation that looks something like this
- Each point in the vector indicates a combination of dimensions to calculate metrics from, using a necessary correction factor. Note that the points in a vector do not need to be consistent. For example, we may never want to drop Age as a dimension until we get to country. So the shorthand is provided only for convenience of notation, and is not a computational restriction.
- the next step is to use the data in the DWH and the vector definition to calculate a predictive cube.
- the cube is equal in dimension granularity to the maximum set of non-time dimensions. So in the example above each cell in the cube would have granularity of:
- Each cell in the cube contains N entries corresponding to the number of campaigns you wish to optimize. So again, by example:
- the metrics for each entry in each cell contain the following
- Predicted RPM which is defined as the Price * the Yield Curve
- the Yield Curve expresses the ratio between what you pay for (impressions) and what you charge for (e.g. impressions, clicks, conversions, achievement levels, etc). In the industry some of these have standard names like CPM (impression), CPC (click), CPA (conversion) and others do not.
- the CPM yield curve is 1. Typically we expect a yield curve to decline by an order of magnitude the farther the revenue event is removed from the cost event. So by example
- the predicted RPM (pRPM) is equal to the Yield-Curve multiplied by the contractual payment unit price.
- pRPM the predicted RPM
- the correction factor is defined as the Yield of the most granular cell (across all campaigns)
- the cell must also contain a record of how it was calculated. This is used to pass-back in serving logs and enters the DWH. This allows us to compare not only the discrepancy between predicted revenue and actual revenue, but the reasons behind the discrepancy (for example, are the correction factors not accurate enough, etc).
- the data in each cell is either something you choose to use or not. We call this the significance test.
- the significance test is written as a mathematical rule. It can be as simple as
- each available impression is categorized according to the dimensions in the cube.
- Campaigns are narrowed down to the list of eligible campaigns.
- Eligible campaigns are passed first (a) the prediction cube to get pRPM then (b) through the secondary rule engine to determine which campaign to select and finally through (c) the learning engine to see if there are additional campaigns that are eligible to serve because they are in learning mode.
- the secondary rule-engine assigns weights (percent probabilities) to campaigns based on the pRPM and other data available in the cube. For example if one campaign has a pRPM of $1.00 and another of $0.99 the secondary rule engine may decide to split traffic 60/40 as the predictions are close. Likewise, if one is at $1.00 and the next one is at $0.10 the traffic split may be 100/0. Further, the rule engine must consider not only the pRPM but how confident the pRPM prediction is. Let us say the #1 campaign has a pRPM of $10.00 and the runner up only $1.00. However, the runner up was calculated with high certainty (no dropped dimensions, 24 hrs) and the winner was predicted with low certainty (14d, lots of dropped dimensions and large correction factors). Then it may choose to serve the winner at only 25% until more data is gathered. A sketch follows below.
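A sketch of that weighting logic. The 60/40, 100/0 and 25% figures come from the text; the 0.9 closeness cutoff is an assumption standing in for the user-defined thresholds.

```python
def serving_weights(prpm_winner: float, prpm_runner_up: float,
                    winner_low_certainty: bool = False) -> tuple[float, float]:
    """Return (winner share, runner-up share) of traffic."""
    if winner_low_certainty:
        winner_share = 0.25            # uncertain winner capped at 25%
    elif prpm_runner_up / prpm_winner > 0.9:
        winner_share = 0.60            # predictions close, e.g. $1.00 vs $0.99
    else:
        winner_share = 1.00            # predictions far apart, e.g. $1.00 vs $0.10
    return winner_share, 1.0 - winner_share
```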
- the learning engine has a separate model for subsidizing campaigns currently in learning mode.
- the learning engine may not need to be involved if the given cell already contains "data we believe" for this campaign (i.e. it is already learned). If it is not already learned, then it must check that (a) the campaign is enrolled in the learning engine and (b) that it has not exceeded the cost/time/risk criteria allocated to it for learning and (c) that it is based on a basket of model campaigns whose pRPM value is sufficient to participate in the secondary rule engine and (d) that the probability of showing a learning campaign is limited by a user defined governor.
- the learning engine assigns each campaign a model based on a basket of other campaigns. Using the model a Learning pRPM can be calculated. For example:
- the learning engine also assigns learning limits. For example, the opportunity cost of this campaign may not exceed $200.
- the opportunity cost is defined as the revenue we would have earned serving the non-learning-assisted winning campaign minus the actual revenue earned in serving this campaign. As this number can be less than zero, it is set to zero if the revenue for this campaign exceeds that of the non-learning-assisted winning campaign.
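That definition as a one-line sketch:

```python
def opportunity_cost(winner_revenue: float, learning_revenue: float) -> float:
    """Foregone revenue from the non-learning-assisted winner, floored at zero."""
    return max(0.0, winner_revenue - learning_revenue)
```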
- the rule-set sets limits or a governor on the frequency with which learning-assisted campaigns may win the auction. For example, a rule may be set to say that no learning-assisted campaign can win more than 25% of the time.
- Each campaign placement (or another optimizer placement) has an entry in the cube for each cell (where the cell is defined based on the data above).
- the random variable Z is dependent on the parameter mu to be estimated, but with a standard normal distribution independent of the parameter mu
- the number z follows from the cumulative distribution function:
- the job of the optimizer is to pick the placement predicted to perform the best, within a given range and confidence interval. It does this through a combination of ranking and statistical analysis to arrive at an answer for each serving decision.
- Y = [Y_1, Y_2, Y_3, ..., Y_{N-1}, Y_N], where Y_1 is the RPM in the first hour, Y_2 is the RPM in the second hour, etc.
- D_avg is the mean of the vector D
- U_0 is the null hypothesis mean
- N is the sample size
- the resulting Z-statistic is then referenced against a Z table to give a probability.
- This probability indicates the confidence level at which the two campaigns' yields will differ.
- This probability is fed into the weights engine of the optimizer, which uses it as the basis for making serving decisions.
- the weights engine uses a predefined set of thresholds to set serving weights of campaigns based on their probabilities of the yields differing.
- the actual weights and thresholds are user defined and can vary. If the probability that the yields differ is high, then the higher yielding campaign will be shown with greater frequency over the lower yielding campaign. If the probability that the campaigns' yields vary is much lower, then the two campaigns will be shown a relatively equal percentage of the time instead, representing the uncertainty over which campaign is actually the higher yielding placement.
- the Universal Placement Server is designed to (a) segment traffic and (b) render content across a (c) series of events. This involves the decisions by both the rule engine and the visitor. Decisions by the rule-engine are called “placements”. Decisions by the visitor or any other 3rd party are called “events”.
- segment traffic e.g. slot, rotation, campaign, ad rotation
- Placements that render content e.g. ad, ad asset, landing page, etc.
- the objective of the Universal Placement Server is to maximize the number (monetary value) of late-stage events (e.g. conversions, purchases) as a fraction of the number (cost) of up front events (e.g. bid opportunities, ad impressions).
- the first step is to enumerate all of the possible placement types.
- a placement instance e.g. slot 1245.
- traffic rules that send/split traffic from one placement to the next (e.g. from slot to campaign-rotation). These rules are either (a) declarative (e.g. if country is US then go to rotation1 else rotation2) or (b) randomized weights (e.g. send 30% to rotation1, 40% to rotation2 and 30% to rotation3). These types of rules can be combined, as sketched below.
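A sketch combining the two rule types, with the placement names taken from the examples above:

```python
import random

def route(visitor_country: str) -> str:
    if visitor_country == "US":            # (a) declarative rule
        return "rotation1"
    return random.choices(                 # (b) randomized weights
        ["rotation1", "rotation2", "rotation3"],
        weights=[30, 40, 30],
    )[0]
```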
- the data is visualized in a pivot table paradigm. This paradigm is seen as having two axes.
- the "X-Axis” is comprised of metrics.
- They "Y-Axis” is comprised of dimensions. Metrics are always represented as counts, dollars or calcualtions based on counts/dollars. The dimensions are always arrays of (typically string) scalar values.
- the log is likewise divided into two types of data elements. It is comprised of:
- Transaction-Source information which is always mapped to individual dimensions (i.e. each field type in the transaction-source is a dimension and each unique value of that field is a unique row for that dimension).
- the dimensions answer the question "what is the audience" about whom the analysis is being done.
- Event-Timeline, which is always mapped to individual metrics (i.e. the count of each event type appearing on the timeline is a value for the metric, as is the sum of the dollars attached to the event).
- the metrics answer the question "how many people did this, and how much money did they make/cost us"
- a single transaction is comprised of one (and only one) transaction-source record and 1-to-N event records.
- Each event is recorded a little farther in the timeline than the previous event (i.e. the
- Event is defined as follows
- Each event is comprised of both required and optional data.
- the required data for the event is
- Duration: may be null or unknown. If known, it measures the time to load for this particular event (e.g. ad-load-time or page-load-time). In the DWH the duration value is actually translated to a dimension (which is the one exception to the mapping
- the optional data for the event is entirely composed of financial data. All of the fields are extremely unlikely to appear on a single event. We will discuss an example of this later.
- the optional data fields are (in the order in which they need to be calculated):
- the fields marked with an * are asserted facts (and are not subject to calculation)
- dCPA dynamic CPx
- this field must be validated before it can be used as revenue (we need to have rules specifying min and max values) to guard against the advertiser accidentally (or on purpose) breaking our serving decisions (e.g. by reporting $0 or by reporting $10000 per event).
- Underwritten-Revenue This field is calculated from a yield-cube lookup. It substitutes a revenue forecast on an earlier event for actual revenue which can be known only on a later event (e.g. revenue from a returning user, underwritten-revenue on a conversion). By definition, it makes no sense to have Revenue and Underwritten-Revenue on the same event.
- Cost-Basis This field has two separate calculations. If necessary, the first calculation is the same as underwritten-revenue for this event (may be absent, and then revenue is used as an input). The second is the application of a discount to the revenue number in order to take additional margin on the advertiser side.
- the margin discount is calculated by the "Advertiser Margin Management" daemon and therefore needs to be reported on separately from cost
- the function of the source record for the transaction is to map onto reportable dimensions.
- the source record is created with each new transaction. Once data is recorded into the source record, it is not modifiable. However, the source record can grow over time, and new data can be appended to it. It is the job of ETL to summarize (union) all of the appends made to the source record to create a single master source record for the transaction.
- a simple (oversimplified) example below illustrates how the source record can grow over the event-timeline, as new facts become known
- Subid-City: Seattle, Subid-ZIP: 91870, Subid-Customer: 123
- Figure 1 illustrates a network environment 100 from which the techniques described may be controlled.
- the network environment 100 has a network 102 that connects S servers 104-1 through 104-S, and C clients 108-1 through 108-C. More details are described below.
- Figure 2 is a block diagram of a computer system 200 which some embodiments of the invention may employ parts of and which may be representative of use in any of the clients and/or servers shown in Figure 1, as well as, devices, clients, and servers in other Figures. More details are described below.
- FIG. 1 illustrates a network environment 100 in which the techniques described may be controlled.
- the network environment 100 has a network 102 that connects S servers 104-1 through 104-S, and C clients 108-1 through 108-C.
- S servers 104-1 through 104-S and C clients 108-1 through 108-C are connected to each other via a network 102, which may be, for example, a corporate based network.
- the network 102 might be or include one or more of: the Internet, a Local Area Network (LAN), Wide Area Network (WAN), satellite link, fiber network, cable network, or a combination of these and/or others.
- LAN Local Area Network
- WAN Wide Area Network
- the servers may represent, for example, disk storage systems alone or storage and computing resources. Likewise, the clients may have computing, storage, and viewing capabilities.
- the method and apparatus described herein may be controlled by essentially any type of communicating means or device whether local or remote, such as a LAN, a WAN, a system bus, etc.
- a network connection which communicates via for example wireless may control an embodiment of the invention having a wireless communications device.
- the invention may find application at both the S servers 104-1 through 104-S, and C clients 108-1 through 108-C.
- Figure 2 illustrates a computer system 200 in block diagram form, which may be representative of any of the clients and/or servers shown in Figure 1.
- the block diagram is a high level conceptual representation and may be
- Bus system 202 interconnects a Central Processing Unit (CPU) 204, Read Only Memory (ROM) 206, Random Access Memory (RAM) 208, storage 210, display 220, audio 222, keyboard 224, pointer 226, miscellaneous input/output (I/O) devices 228 having a link 229, and communications 230 having a port 232.
- the bus system 202 may be, for example, one or more of such buses as a system bus, Peripheral Component Interconnect (PCI), Advanced Graphics Port (AGP), Small Computer System Interface (SCSI), Institute of Electrical and Electronics Engineers (IEEE) standard number 1394 (FireWire), Universal Serial Bus (USB), etc.
- the CPU 204 may be a single, multiple, or even a distributed computing resource.
- Storage 210 may be Compact Disc (CD), Digital Versatile Disk (DVD), hard disks (HD), optical disks, tape, flash, memory sticks, video recorders, etc.
- Display 220 might be, for example, a liquid crystal display (LCD).
- An apparatus for performing the operations herein can implement the present invention.
- This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer, selectively activated or reconfigured by a computer program stored in the computer.
- a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, hard disks, optical disks, compact disk read-only memories (CD-ROMs), and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), FLASH memories, magnetic or optical cards, etc., or any type of media suitable for storing electronic instructions either local to the computer or remote to the computer.
- the methods of the invention may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems.
- the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
- it is common in the art to speak of software, in one form or another (e.g., a program, procedure, application, etc.), as taking an action or causing a result. Such expressions are merely a shorthand way of saying that execution of the software by a computer causes the processor of the computer to perform a useful action or produce a useful result.
- Such useful actions/results may be presented to a user in various ways, for example, on a display, producing an audible tone, mechanical movement of a surface, etc.
- a machine-readable medium is understood to include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
- a machine-readable medium includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals which upon reception causes movement in matter (e.g. electrons, atoms, etc.) (e.g., carrier waves, infrared signals, digital signals, etc.); etc.
- “one embodiment” or “an embodiment” or similar phrases mean that the feature(s) being described are included in at least one embodiment of the invention. References to “one embodiment” in this description do not necessarily refer to the same embodiment; however, neither are such embodiments mutually exclusive. Nor does “one embodiment” imply that there is but a single embodiment of the invention. For example, a feature, structure, act, etc. described in “one embodiment” may also be included in other embodiments. Thus, the invention may include a variety of combinations and/or integrations of the embodiments described herein.
Landscapes
- Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Finance (AREA)
- Economics (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- General Factory Administration (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US32618510P | 2010-04-20 | 2010-04-20 | |
US13/089,241 US20110258037A1 (en) | 2010-04-20 | 2011-04-18 | Method and Apparatus for Campaign and Inventory Optimization |
PCT/US2011/033004 WO2011133519A2 (fr) | 2010-04-20 | 2011-04-19 | Procédé et appareil d'optimisation de campagne et d'inventaire |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2561473A2 true EP2561473A2 (fr) | 2013-02-27 |
EP2561473A4 EP2561473A4 (fr) | 2014-12-31 |
Family
ID=44834758
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP11772532.5A Withdrawn EP2561473A4 (fr) | 2010-04-20 | 2011-04-19 | Procédé et appareil d'optimisation de campagne et d'inventaire |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2561473A4 (fr) |
WO (1) | WO2011133519A2 (fr) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090254412A1 (en) * | 2008-04-07 | 2009-10-08 | Edward Braswell | Methods and systems using targeted advertising |
US20100057534A1 (en) * | 2008-08-27 | 2010-03-04 | Smart Channel, L.L.C. | Advertising-buying optimization method, system, and apparatus |
US20100088177A1 (en) * | 2008-10-02 | 2010-04-08 | Turn Inc. | Segment optimization for targeted advertising |
US20100088152A1 (en) * | 2008-10-02 | 2010-04-08 | Dominic Bennett | Predicting user response to advertisements |
- 2011-04-19: WO application PCT/US2011/033004 published as WO2011133519A2 (active, Application Filing)
- 2011-04-19: EP application EP11772532.5A published as EP2561473A4 (not active, Withdrawn)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7007020B1 (en) * | 2000-03-10 | 2006-02-28 | Hewlett-Packard Development Company, L.P. | Distributed OLAP-based association rule generation method and system |
WO2006127859A2 (fr) * | 2005-05-25 | 2006-11-30 | Experian Marketing Solutions, Inc. | Architecture de base de donnees repartie et interactive pour le traitement de donnees parallele et asynchrone de donnees complexe et le traitement de requetes en temps reel |
Non-Patent Citations (1)
Title |
---|
See also references of WO2011133519A2 * |
Also Published As
Publication number | Publication date |
---|---|
WO2011133519A2 (fr) | 2011-10-27 |
EP2561473A4 (fr) | 2014-12-31 |
WO2011133519A3 (fr) | 2012-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20110258037A1 (en) | Method and Apparatus for Campaign and Inventory Optimization | |
US20240185304A1 (en) | Methods, systems, and devices for counterfactual-based incrementality measurement in digital ad-bidding platform | |
US11093977B2 (en) | Ad ranking system and method utilizing bids and adjustment factors based on the causal contribution of advertisements on outcomes | |
Lewis et al. | Measuring the Effects of Advertising | |
US20120158456A1 (en) | Forecasting Ad Traffic Based on Business Metrics in Performance-based Display Advertising | |
US20110196733A1 (en) | Optimizing Advertisement Selection in Contextual Advertising Systems | |
US20190073699A1 (en) | Matching visitors as leads to lead buyers | |
US20190347675A1 (en) | System and method for user cohort value prediction | |
US20150032507A1 (en) | Automated targeting of information to an application visitor based on merchant business rules and analytics of benefits gained from automated targeting of information to the application visitor | |
CN111667311B (zh) | Advertisement delivery method, related apparatus, device, and storage medium | |
US20120284128A1 (en) | Order-independent approximation for order-dependent logic in display advertising | |
US20150066644A1 (en) | Automated targeting of information to an application user based on retargeting and utilizing email marketing | |
US20140122253A1 (en) | Systems and methods for implementing bid adjustments in an online advertisement exchange | |
US11562298B1 (en) | Predictive analytics using first-party data of long-term conversion entities | |
US20110251889A1 (en) | Inventory clustering | |
US11144968B2 (en) | Systems and methods for controlling online advertising campaigns | |
WO2011133507A2 (fr) | Method and apparatus for creative optimization | |
US10891640B2 (en) | Adaptive representation of a price/volume relationship | |
US20150032540A1 (en) | Automated targeting of information influenced by delivery to a user | |
WO2011133519A2 (fr) | Method and apparatus for campaign and inventory optimization | |
WO2011133535A2 (fr) | Method and apparatus for product and post-conversion optimization | |
US20150032532A1 (en) | Automated targeting of information influenced by geo-location to an application user using a mobile device | |
WO2011133563A2 (fr) | Method and apparatus for a universal placement server | |
EP2561475A2 (fr) | Method and apparatus for landing page optimization | |
KR102602291B1 (ko) | Advertising method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20121119 |
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| DAX | Request for extension of the european patent (deleted) | |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20141128 |
| RIC1 | Information provided on ipc code assigned before grant | Ipc: G06Q 30/00 20120101AFI20141124BHEP |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: EXAMINATION IS IN PROGRESS |
| 17Q | First examination report despatched | Effective date: 20161124 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
| 18W | Application withdrawn | Effective date: 20170428 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |