AU2016310418A1 - A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates - Google Patents


Info

Publication number
AU2016310418A1
Authority
AU
Australia
Prior art keywords
video
advert
data
meta data
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
AU2016310418A
Inventor
Jamie MARTEL
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2015903461A external-priority patent/AU2015903461A0/en
Application filed by Individual filed Critical Individual
Publication of AU2016310418A1 publication Critical patent/AU2016310418A1/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0276Advertisement creation

Abstract

There is provided a method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates, the method comprising retrieving video advert data of a video advert from a video advert database; performing automated content analysis of the video advert data; automating Q&A meta data generation in accordance with the automated content analysis, the Q&A meta data comprising question meta data and associated answer meta data; associating the Q&A meta data with the video advert data; serving, to a client computing device, the video advert, wherein the client computing device displays question data derived from the Q&A meta data and wherein the client computing device is configured for receiving user answer data from a user; and comparing the user answer data to the answer meta data.

Description

PCT/AU2016/050805 WO 2017/031554 A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates
Field of the Invention [1] The present invention relates to web advertising and in particular, but not necessarily entirely, to a web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates.
Background of the Invention [2] Turning to figure 3A, there is shown the rendering of a video advert in accordance with the prior art, such as that which is employed by the YouTube video streaming platform, for example. Specifically, the YouTube video streaming platform is adapted to render a video advert prior to rendering a requested video.
[3] So as to attempt to increase video advert viewing durations, the rendering comprises an indication 14 that the user may skip the advert within a predetermined amount of time. For example, when first viewing an advert, the user may be forced to view the advert for at least 5 seconds prior to being able to skip the remainder of the rendering advert. For subsequent video requests, the user may be required to watch a video advert for longer periods, such as for 15 seconds. Yet further, for further subsequent video requests, the user may not be able to skip ads at all.
[4] However, such an arrangement is deficient in several respects. For example, forcing a user to view an advert, even with the ability to skip the remainder of the advert, is little different from forcing the user to watch the entire advert. Users resent being forced to watch adverts and are therefore non-receptive to the marketing message of the video advert.
[5] Furthermore, most users utilise the skip functionality such that the remainder of the video advert goes largely unwatched. As such, video adverts displayed in this manner have very low video advert viewing completion rates.
[6] US 20070106557 A1 (hereafter "D1") discloses methods and systems for advertising wherein an advertisement is selectively broadcast or otherwise distributed with a compensation tag indicating that a user may receive compensation for his or her attention to the advertisements. If the user responds to the compensation tag, his or her attention to the advertisement is verified and he or she is compensated. Depending on the embodiment, the advertisement may be broadcast or distributed using television, interactive television, billboards, radio, print, cellular telephone, other mobile devices, the World Wide Web, or another medium.
[7] US 20100162289 A1 (hereafter "D2") discloses a method for incentivizing a viewer to view advertisements presented during programming delivered over a content delivery system. The method includes receiving over a content delivery system a program and one or more advertisements associated therewith. At least a portion of the program and at least one of the advertisements associated therewith is rendered. A first video segment is received over the content delivery system. The first video segment prompts the viewer to provide user input indicating that the viewer has viewed at least one of the advertisements. The prompt is presented to the viewer. The user input is received in response to presentation of the prompt. The viewer is rewarded after at least one predetermined criterion is met. The predetermined criterion includes a determination that the user input is a proper response to the prompt.
[8] The present invention seeks to provide a system and method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates which will overcome or substantially ameliorate at least some of the deficiencies of the prior art, or to at least provide an alternative.
[9] It is to be understood that, if any prior art information is referred to herein, such reference does not constitute an admission that the information forms part of the common general knowledge in the art, in Australia or any other country.
Summary of the Disclosure [10] In accordance with one aspect, there is provided a method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates, the method comprising: retrieving video advert data of a video advert from a video advert database; performing automated content analysis of the video advert data; automating Q&A meta data generation in accordance with the automated content analysis, the Q&A meta data comprising question meta data and associated answer meta data; associating the Q&A meta data with the video advert data; serving, to a client computing device, the video advert, wherein the client computing device displays question data derived from the Q&A meta data and wherein the client computing device is configured for receiving user answer data from a user; and comparing the user answer data to the answer meta data.
[11] The content analysis may comprise analysis of speech to text data derived from the video advert data.
[12] The content analysis may comprise image recognition analysis of the video advert data.
[13] The image recognition analysis may comprise text recognition configured for identifying text displayed within the video advert. [14] The image recognition analysis may comprise object recognition configured for identifying an object displayed within the video advert.
[15] Object recognition may comprise reference image correlation.
[16] Object recognition may comprise shape detection.
[17] Object recognition may comprise colour detection.
[18] The image recognition analysis may comprise facial recognition.
[19] The image recognition analysis may be configured for recognising a number of faces within the video advert.
[20] The image recognition analysis may be configured for determining a timestamp at which a face may be detected.
[21] The image recognition analysis may comprise person recognition analysis.
[22] The person recognition analysis may be configured for performing a correlation analysis of a face profile database.
[23] The person recognition analysis may be configured for determining a timestamp at which a person may be identified.
[24] The content analysis further may comprise speech to text conversion and wherein the content analysis may be configured for associating converted speech to text with an identified person.
[25] Content analysis may comprise matching the video advert data content against matching data.
[26] The matching data may comprise at least one keyword and wherein the content analysis may comprise identifying content of the video data matching the at least one keyword.
[27] The question meta data generated by the Q&A meta data generation may comprise author provided Q&A meta data.
[29] The question meta data generated by the Q&A meta data generation may comprise a video playout timestamp and wherein comparing the answer data to the answer meta data may comprise comparing a timestamp at which the answer data may be received to the video playout timestamp.
[30] The question meta data generated by the Q&A meta data generation may comprise a relative position within a rendering frame of the video data and wherein comparing the answer data to the answer meta data may comprise comparing a user provided relative position with the relative position of the Q&A meta data.
[31] The relative position of the Q&A meta data may be generated utilising object recognition.
[32] The Q&A meta data generation may comprise application of a rule from a set of rules.
[33] The set of rules may comprise hierarchical rules. [34] The Q&A meta data generation may comprise source content analysis.
[35] The source content analysis may comprise analysis of web content.
[36] Other aspects of the invention are also disclosed.
Brief Description of the Drawings [37] Notwithstanding any other forms which may fall within the scope of the present invention, preferred embodiments of the disclosure will now be described, by way of example only, with reference to the accompanying drawings in which: [38] Figure 1 shows a system for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates in accordance with an embodiment of the present disclosure; [39] Figure 2 shows the metadata generator of the system of Figure 1 in further detail wherein the system is configured for automated content analysis and Q&A metadata generation in accordance with embodiments of the present disclosure; [40] Figure 3 shows various exemplary graphical user interfaces in accordance with the prior art and embodiments of the present disclosure.
Description of Embodiments [41] For the purposes of promoting an understanding of the principles in accordance with the disclosure, reference will now be made to the embodiments illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Any alterations and further modifications of the inventive features illustrated herein, and any additional applications of the principles of the disclosure as illustrated herein, which would normally occur to one skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the disclosure.
[42] Before the structures, systems and associated methods relating to the web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates are disclosed and described, it is to be understood that this disclosure is not limited to the particular configurations, process steps, and materials disclosed herein as such may vary somewhat. It is also to be understood that the terminology employed herein is used for the purpose of describing particular embodiments only and is not intended to be limiting since the scope of the disclosure will be limited only by the claims and equivalents thereof.
[43] In describing and claiming the subject matter of the disclosure, the following terminology will be used in accordance with the definitions set out below. [44] It must be noted that, as used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
[45] As used herein, the terms "comprising," "including," "containing," "characterised by," and grammatical equivalents thereof are inclusive or open-ended terms that do not exclude additional, unrecited elements or method steps.
[46] It should be noted in the following description that like or the same reference numerals in different embodiments denote the same or similar features.
System for interactive advert authoring, serving and user interaction monitoring for increasing advert viewing completion rates [47] Turning now to figure 1, there is shown a system 1 for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates.
[48] As will be described in further detail below, the system 1 is adapted for increasing video advert viewing completion rates and user advert engagement and interaction to enhance brand recollection and messaging.
[49] The system 1 comprises a client computing device 10 such as a personal computing device, mobile communication device and the like. Client computing device 10 is in operable communication with an interactive advert server 8 across a data network, such as the Internet.
[50] In a preferred embodiment, the computer architecture provided in figure 1 has a web server architecture such that the client computing device 10 is adapted to request resources from the interactive advert server 8 utilising web protocols. In this manner, the interactive advert server 8 may comprise a web server such as the Apache web server in operable communication with a hypertext preprocessor such as the PHP hypertext preprocessor adapted for responding to web requests with dynamically generated webpages.
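The disclosure names the Apache web server and the PHP hypertext preprocessor; purely to illustrate the request/response pattern, a comparable dynamically generated response may be sketched with Python's standard library. The handler class, template and field values below are hypothetical and not prescribed by this disclosure.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical page template; a production system would generate this
# from the advert video data and its associated Q&A meta data.
AD_PAGE_TEMPLATE = """<html><body>
<video src="{video_url}"></video>
<div id="qa-interface" data-question="{question}"></div>
</body></html>"""

class AdvertRequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Dynamically generate the advert page per request, analogous to
        # a server-side script assembling a response for each web request.
        page = AD_PAGE_TEMPLATE.format(
            video_url="/adverts/volvo.mp4",
            question="How many kids were in the back seat?",
        )
        body = page.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("localhost", 8080), AdvertRequestHandler).serve_forever()
# would start the sketch server; it is left commented out here.
```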
[51] As such, the client computing device 10 may comprise a browser application 11 so as to provide a user interface to the user for the purposes of requesting and rendering web requests.
[52] However, it should be noted that the embodiments described herein need not necessarily be limited to this particular computer architecture. For example, the functionality described herein may be provided by way of a customised downloadable software application being executed by the client computing device 10. In this embodiment, the client computing device 10 need not necessarily utilise a web browser application 11 for the purposes of interacting with the advert server 8.
[53] As will be described in further detail below, the interactive advert server 8 is adapted for serving interactive video adverts to the client computing device 10 wherein the interactive video adverts are adapted for increasing video advert viewing completion rates. Specifically, as will become apparent from the below description, the interactive video adverts are adapted to display questions relating to the content of the video advert upon completion of the rendering of the interactive video advert which, if answered correctly, allow for the rewarding of users, such as by way of remuneration. In this manner, clients are actively encouraged to willingly participate in the viewing of adverts, resulting in greater user interaction, viewing completion rates and the like.
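By way of illustration only, the serve/question/reward cycle described above may be sketched as follows; the data shapes, function name and answer-matching rule are assumptions made for the sketch and are not prescribed by this disclosure.

```python
def serve_interactive_advert(qa_meta, user_answer, accounts, user_id):
    """Pose the advert's question after playout and credit a correct answer."""
    # The video advert is rendered to completion on the client, after which
    # the Q&A interface poses the authored question (qa_meta["question"]).
    # The user answer data is then compared against the answer meta data;
    # case-insensitive string equality is an illustrative assumption.
    correct = user_answer.strip().lower() == qa_meta["answer"].strip().lower()
    if correct:
        # A correct answer increments the user's account credit balance.
        accounts[user_id] = accounts.get(user_id, 0.0) + qa_meta["reward"]
    return correct

accounts = {}
qa_meta = {"question": "How many kids were in the back seat?",
           "answer": "Two", "reward": 2.0}
serve_interactive_advert(qa_meta, "two", accounts, "user-1")  # credits $2
```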
Advert video data base 5 [54] In one embodiment, the interactive advert server 8 is in operable communication with an advert video data base 5.
[55] It should be noted that the advert video database 5 may be an externally located and provided third-party advert video data base 5 or, in other embodiments, the advert video database may be managed by the interactive advert server 8 such that advertisers are able to upload video adverts directly to the interactive advert server 8.
[56] However, where the advert video database 5 is an externally provided and managed advert video database 5, the system 1 may advantageously take advantage of a large number of existing advert videos which are augmented with question and answer metadata for the purposes of generating interactive adverts as will be described in further detail below.
Interactive advert authoring [57] For the purposes of authoring the interactive video adverts, the system 1 comprises a Q&A metadata database 7. The Q&A metadata database 7 is in operable communication with the interactive advert server 8 such that the interactive advert server 8 is able to generate the interactive video adverts in accordance with video data from the advert video database 5 and the Q&A metadata from the Q&A metadata database 7.
[58] Now, for the purposes of authoring the interactive video adverts, the system 1 may comprise an interactive advert authoring client computing device 2. The client computing device 2 is adapted for selecting a video advert from the advert video database 5 and creating Q&A metadata for the selected video advert for storage within the Q&A metadata database 7.
[59] In a further embodiment as will be described in further detail below, as opposed to utilising human intervention for the purposes of creating the Q&A metadata, the system 1 may comprise a metadata generator 4, being adapted for automating the generation of the Q&A metadata in accordance with automated content analysis, artificial intelligence, machine learning and the like. In this manner, utilising the metadata generator 4, the system 1 may generate Q&A metadata from a large number of existing video adverts.
User management [60] In embodiments, the system 1 may comprise a user account database 9 in operable communication with the interactive advert server 8 for the purposes of allowing the interactive advert server 8 to allocate credit to user accounts in accordance with advert user interactions.
Payouts [61] In a further embodiment, the system 1 may comprise payout management 3 for the purposes of managing payouts of user credit amounts such as by initiating electronic funds transfers.
Client side metadata manager [62] In embodiments, the interactive advert data provided by the interactive advert server 8 to the client computing device 10 may comprise client side scripts such as Adobe Flash, JavaScript, HTML 5 and the like (referred to herein as the client side metadata manager 13) for the purposes of rendering a Q&A interface upon completion of the rendering of a video advert, the Q&A interface adapted for posing the question specified by the Q&A metadata.
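Purely by way of example, the interactive advert data handed to the client side metadata manager 13 might take a form such as the following; the field names and the JSON encoding are assumptions for illustration only.

```python
import json

# Hypothetical payload shape: the video to render plus the Q&A meta data
# the client side metadata manager would use to build the Q&A interface.
interactive_advert_payload = {
    "video_url": "https://example.com/adverts/volvo.mp4",
    "qa": {
        "question": "How many kids were in the back seat?",
        "candidates": ["One", "Two", "Three"],
        # The Q&A interface is rendered upon completion of the video advert.
        "show_at": "on_completion",
    },
}
encoded = json.dumps(interactive_advert_payload)
```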
[63] Furthermore, the browser 11 may comprise a video renderer 12 for the purposes of rendering the video advert.
[64] As alluded to above, the client computing device 10 need not necessarily utilise a browser 11 in all applications and may, in other embodiments, utilise a customised software application executed by the client computing device 10.
[65] It should be noted that the embodiment provided in figure 1 is exemplary only and that variations may be made thereto within the purposive scope of the embodiments described herein.
[66] Specifically, whereas certain computing integers are shown as being separate, certain features and functionality may be combined, such as by being executed by a single computing device. For example, the interactive advert server 8 may comprise the Q&A metadata database 7, the video advert database 5, the user account database 9, the metadata generator 4 and the like.
Exemplary embodiments [67] Having described the above exemplary technical architecture, there will now be provided, by way of illustration, various exemplary embodiments. It should be noted that these embodiments are exemplary only and that no technical limitation should necessarily be imputed to the other embodiments described herein accordingly.
[68] As will become apparent from the ensuing description, these exemplary embodiments will describe the web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates.
Exemplary embodiment - authoring [69] The exemplary embodiment begins with the authoring of interactive video adverts wherein existing video adverts are augmented with Q&A metadata for the purposes of subsequently generating interactive video adverts.
[70] As alluded to above, the system 1 may comprise an existing advert video database 5 comprising video adverts which have already been created. As such, during the authoring process, a particular existing video advert may be selected for authoring.
[71] For example, the car manufacturer Volvo may wish to increase video viewing completion rates of their web-based video adverts for the new three series Volvo.
[72] As such, an author working on behalf Volvo may select an existing video advert for the Volvo three series from the advert video database 5. In embodiments, the author may first be required to create an account with the interactive advert server 8 for the purposes of authoring the adverts. Once done, the interactive advert server 8 may provide an interface displaying available adverts for authoring, such as by allowing the author to input keywords and the like so as to locate the appropriate advert for authoring.
[73] Once having selected the appropriate video advert for the Volvo three series, the author is then able to generate the Q&A metadata during the authoring process.
[74] Specifically, upon selection of an appropriate advert, an authoring interface may be displayed by the interactive advert authoring client computing device 2 allowing the author to view the video advert.
[75] For example, the author may select a video advert showing a family driving the Volvo three series through the countryside. The video advert may show the husband and wife occupying the front seat of the vehicle and their children and pet dog occupying the rear seat.
[76] As such, the author may, utilising the authoring interface, input a question such as "how many kids were in the back seat?" Furthermore, in embodiments, the author may input various candidate answers such as "One", "Two" and so on.
[77] In further embodiments as will be described in further detail below, utilising the authoring client computing device 2, the author may specify matching data, such as matching data comprising a plurality of keywords, such that, in embodiments where the metadata generator 4 performs automated content analysis, the metadata generator 4 may match content recognised from the automated content analysis against the matching data. In this way, for example, an author may specify that all videos relating to the Volvo vehicle should be authored by the metadata generator 4. It should be noted herein that the system 1 may work across all video data types, including those which are made for advertising. Specifically, video adverts may be authored specifically, or existing videos, such as those which may already exist within the YouTube video database, for example, may be authored automatically, including those which match the above-described matching data.
Exemplary embodiment - reward amount [78] In embodiments, an author may specify a reward amount, being an amount of credits to allocate to users who correctly answer the question. For example, the author may allocate an amount of $2 for every correct answer. As such, if the user answers the question correctly as will be described in further detail below, the user's user account credit balance may be incremented by $2. In embodiments however, the system 1 may be adapted to take a commission such that, for example, should the advertiser specify a reward amount of $2, the system may be adapted to take $0.10 as a commission.
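The worked figures above ($2 reward, $0.10 commission) may be sketched as follows. Whether the commission is deducted from the user's credit or charged to the advertiser separately is not prescribed by the disclosure, so the net-of-commission split below is an assumption.

```python
def settle_correct_answer(balance, reward=2.00, commission=0.10):
    """Return (new user credit balance, platform commission earned).

    Assumption: the user is credited the reward net of the commission.
    """
    user_credit = reward - commission
    return round(balance + user_credit, 2), commission

new_balance, fee = settle_correct_answer(balance=0.00)
```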
[79] Having authored the advert in this manner, the Q&A metadata is stored within the database 7. Specifically, metadata representing the question, the correct answer and the plurality of candidate answers may be stored within the Q&A metadata database 7. Additionally, further information may be stored such as the reward amount, display network, display schedule, answer hints and the like.
Exemplary embodiment - Automated authoring [80] Turning to figure 2, there is shown the metadata generator 4 in further detail in accordance with an embodiment of the present disclosure wherein the system 1 is configured for automated authoring of the video content.
[81] Specifically, as will be described in further detail below, the metadata generator 4, in this embodiment, is configured for analysing the content of the video data retrieved from the advert video database 5 so as to be able to automate the authoring of the Q&A meta data.
[82] As such, as can be seen, the metadata generator 4 comprises a crawler 29 configured to retrieve advert video data from the advert video database 5. During the retrieval thereof, the video data may pass through a speech to text converter 30 to convert any speech within the advert video data to text. As alluded to above, such video data may comprise advert video data created as advertising material and ordinary non-advertising video data.
[83] Thereafter, the metadata generator 4 may comprise a content analysis module 23 configured to analyse the content of the video data.
[84] As will be described in further detail below, the content analysis module 23 is used for analysing the content of the video data such that Q&A meta data may be generated in an automated manner by the metadata generator 4.
[85] Now, in embodiments, the content analysis module 23 may comprise a matching data matching module 24 configured for matching the content analysed from the advert video data with a plurality of matching data 31 received from the authoring client computing device 2. Specifically, in this embodiment, authors, utilising the authoring client computing device 2, may provide a plurality of matching data. For example, a person wishing to promote video content for the Volvo car manufacturer may provide text matching data comprising the keywords "Volvo" and "XC90" (being a Volvo model).
[86] As such, during the content analysis of the video data, the content analysis module 23 may be configured to identify analysed video content matching these matching data. Such matching data matching may be implemented by the system 1 in a "pay-per-view" model wherein authors are charged for each video which is matched to author provided matching data and for which Q&A meta data is generated and served to a viewer. Alternatively, a "pay-per-completion" model may be utilised wherein authors are only charged for users who actually engage in the answering of questions in the manner described herein.
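The two charging models may be contrasted with a minimal sketch; the per-event price and the event record shape are invented placeholders, not figures from the disclosure.

```python
def charge_author(events, model, price=0.05):
    """events: list of dicts like {"served": True, "completed_qa": False}.

    "pay-per-view" bills every matched advert served to a viewer;
    "pay-per-completion" bills only adverts whose question was answered.
    """
    if model == "pay-per-view":
        billable = [e for e in events if e["served"]]
    elif model == "pay-per-completion":
        billable = [e for e in events if e["completed_qa"]]
    else:
        raise ValueError(f"unknown charging model: {model}")
    return round(len(billable) * price, 2)

events = [{"served": True, "completed_qa": True},
          {"served": True, "completed_qa": False}]
```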
[87] In other words, should the content analysis module 23 identify video content relating to a Volvo vehicle, the content analysis module 23 may match the matching data 31 so as to trigger the creation of associated Q&A meta data.
[88] In alternative embodiments, as opposed to providing matching data, the author may specify a subset of video adverts relating to Volvo which are to be authored wherein, in this embodiment, such videos need not therefore be identified utilising a matching data matching technique.
[89] As such, it should be noted that such author provided matching data need not necessarily be employed in all embodiments described herein wherein, in embodiments, the content analysis module 23 may generate Q&A meta data in the absence of matching data.
[90] Now, there are various ways in which content may be analysed. Utilising the above-described example wherein the metadata generator 4 comprises the speech to text converter 30, content may be analysed and matched on the basis of the text dialogue record for each video advert as was converted by the speech to text converter 30. It should be noted that in embodiments, such speech to text generation may have been performed elsewhere such that the advert video database 5 already comprises the text meta data representing the text format of the speech from the video adverts.
[91] As such, utilising the above-described example, the matching data matching module 24 may recognise the word "Volvo" within the text meta data.
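A minimal sketch of the matching data matching module 24 operating on the speech to text record might read as follows; the tokenisation and case-folding rules are assumptions made for illustration.

```python
def match_transcript(transcript, matching_data):
    """Return the author-provided keywords found in a speech-to-text record."""
    # Naive whitespace tokenisation with case folding; a production
    # matcher would handle punctuation, stemming and phrases.
    words = set(transcript.lower().split())
    return sorted(k for k in matching_data if k.lower() in words)

transcript = "the new volvo xc90 takes the family through the countryside"
hits = match_transcript(transcript, ["Volvo", "XC90", "Coke"])
```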
[92] In other embodiments, other content recognition techniques may be utilised including wherein the content analysis module 23 comprises an image recognition module 25. In this regard, the image recognition module 25 is configured for analysing the image content of the video adverts.
[93] In a first embodiment, the image recognition module 25 may comprise text recognition. For example, a particular video advert may comprise a rear view of a vehicle wherein the word "Volvo" is displayed on the rear of the vehicle. As such, the text recognition module 26 may be configured for identifying such text within the video.
[94] In embodiments, as opposed to being limited to recognising text only, the text recognition module 26 may be configured for recognising trademark shapes, such as the Volvo insignia for example.
[95] In further embodiments, the image recognition module 25 may comprise object recognition 27. In this embodiment, the object recognition module 27 may be configured for recognising various objects within a video, such as a Coke can. In this manner, Coca-Cola may set a number of matching data 31 configured to be matched by the content analysis module 23 for any advert video data within which a Coke can is recognised. Such recognition may utilise a plurality of object recognition techniques. In a first embodiment, the object recognition module 27 may utilise correlation for object comparison wherein a plurality of reference objects are correlated against the video data to identify a correlation. For example, a reference image of a red Coca-Cola can may be provided such that the object recognition module 27 may perform a correlation thereof utilising the image data from the advert video data.
[96] In further embodiments, the object recognition module 27 may perform object recognition in other manners including by way of shape and/or colour analysis and the like. In this regard, the object recognition module 27 may be configured for identifying the characteristic profile of a Coke bottle or Coke can, including the colour thereof.
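The correlation approach to object recognition may be illustrated with a toy example: a small reference patch is slid over a grid of pixel intensities and the offset with the highest raw correlation score is reported. A production system would use normalised correlation and far more robust matching; this sketch only conveys the idea.

```python
def correlate_at(image, patch, top, left):
    """Raw correlation of the patch against the image at one offset."""
    score = 0
    for r, row in enumerate(patch):
        for c, val in enumerate(row):
            score += val * image[top + r][left + c]
    return score

def best_match(image, patch):
    """Return the (top, left) offset where the reference patch correlates best."""
    rows, cols = len(image), len(image[0])
    p_rows, p_cols = len(patch), len(patch[0])
    positions = [(t, l) for t in range(rows - p_rows + 1)
                        for l in range(cols - p_cols + 1)]
    return max(positions, key=lambda pos: correlate_at(image, patch, *pos))

# A bright 2x2 "object" embedded at offset (1, 2) in an otherwise dark image.
image = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
patch = [[1, 1],
         [1, 1]]
```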
[97] In further embodiments, the image recognition module 25 may comprise a facial recognition module 28 configured for identifying human faces within the advert video data. In this regard, by way of example, the subsequently generated Q&A meta data may pose the question "how many people were shown in the advert?" In this regard, the facial recognition module 28 may count the number of people/faces appearing in a video advert so as to be able to pose such a question.
[98] In further embodiments, the image recognition module 25 may comprise a person recognition module 39 which may identify various people within videos. For example, the person recognition module 39 may recognise Jane Fonda. In this regard, the subsequently generated Q&A meta data may pose the instruction "click the video screen the first time you see Jane Fonda".
[99] Such person recognition 39 may make recourse to a face profile database, such as that which may be provided by a social media platform such that the names of such persons identified may be readily ascertained.
[100] The person recognition module 39 may record the associated playout timestamp at which the relevant person is recognised. [101] Now, having analysed the content utilising the content analysis module 23, data derived from the content analysis module 23 may be fed into a Q&A meta data generator 32. Differing types and forms of data may be generated by the content analysis module 23. For example, where the content analysis module 23 is configured for performing text matching on the speech to text data, the matching keywords may be fed into the Q&A meta data generator 32.
[102] As alluded to above, text matching against the matching data need not necessarily be utilised in all embodiments. In this regard, the content analysis module 23 may be configured for identifying significant keywords within the speech to text data which may comprise a word distribution frequency analysis wherein less frequently appearing words are of more significance, such as people's names, brand names and the like.
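The word distribution frequency analysis mentioned above might be sketched as follows, where the background frequency table is an illustrative assumption standing in for real corpus statistics:

```python
# Hedged sketch of the word-frequency significance idea: against a background
# frequency table, rarer words (brand names, people's names) score higher.
# The corpus counts below are illustrative assumptions, not real data.
from collections import Counter

BACKGROUND = Counter({"the": 1000, "car": 120, "is": 900, "a": 950,
                      "new": 200, "volvo": 2, "safe": 40})

def significant_keywords(speech_text, top_n=3):
    """Rank words by inverse background frequency (unseen words rank highest)."""
    words = [w.strip(".,!?").lower() for w in speech_text.split()]
    scored = {w: 1.0 / (1 + BACKGROUND.get(w, 0)) for w in set(words)}
    return [w for w, _ in sorted(scored.items(), key=lambda kv: -kv[1])][:top_n]
```

For the utterance "The new Volvo is a safe car", the rare brand name "volvo" outranks the common words, which is the behaviour the paragraph describes.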
[103] Alternatively, when no matching data is employed, the speech to text data may be fed into the Q&A meta data generator 32.
[104] Where the content analysis module 23 is configured for implementing text recognition utilising the text recognition module 26, the text recognised by the text recognition module 26 may be fed into the Q&A meta data generator 32.
[105] Furthermore, when the content analysis module 23 is configured for object recognition utilising the object recognition module 27, object meta data may be fed into the Q&A meta data generator 32 wherein, for example, the name of the identified object such as "Coke can", and other associated metadata, such as the orientation of the Coke can, such as upright, at 45° or the like, and the colour of the Coke can, such as red, green, gold or the like, may be provided.
[106] Furthermore, wherein the content analysis module 23 implements facial recognition utilising the facial recognition module 28, various facial recognition meta data may be provided, such as the number of faces identified, the time period interval at which each face is identified, the orientation of each face and the like.
[107] Furthermore, wherein the content analysis module 23 is configured for implementing person recognition 39, meta data may be provided to the Q&A meta data generator 32 comprising the name of the person, the time interval at which the person was identified within the video advert and the like.
[108] In embodiments, the content analysis module 23 may utilise a combination of the above techniques wherein, for example, person recognition may be combined with speech to text wherein, for example, where a person is identified, the speech recognised at that interval may be associated with the person. In this manner, Q&A meta data may be generated wherein, for example, the Q&A meta data generator 32 may pose a question "what did Jane Fonda say?". [109] Turning now specifically to the Q&A meta data generator 32, in embodiments, the Q&A meta data generator 32 may generate the Q&A meta data in different manners.
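The combination described in paragraph [108], aligning person recognition timestamps with speech to text segments, might look like the following sketch; the event and segment data structures are assumptions:

```python
# Illustrative combination of person-recognition and speech-to-text outputs
# (structures are assumptions): when a recognised person's timestamp falls
# inside a speech segment, a "what did X say?" question can be generated.
def combine(person_events, speech_segments):
    """person_events: [(name, t)]; speech_segments: [(t_start, t_end, text)].
    Returns generated (question, answer) tuples."""
    qa = []
    for name, t in person_events:
        for start, end, text in speech_segments:
            if start <= t <= end:
                qa.append((f'What did {name} say?', text))
    return qa
```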
[110] In one embodiment, the Q&A meta data generator 32 may comprise an author content applicator module 33 configured for applying content provided by an author utilising the authoring client computing device 2. For example, for the author working for Volvo, the author may stipulate that every time a Volvo is identified within the video data, the supplied question and answer text is to be applied. For example, the author may stipulate that for every video identified as relating to a Volvo vehicle, the question "which vehicle was shown in the video?" is to be posed, with multiple choice answer meta data comprising any one of Volvo, BMW, Ford and the like.
[111] Wherein the author configures the Q&A meta data to be applied, in embodiments, the author may utilise wild cards such as a model number wildcard. For example, the content analysis module 23 may be configured for identifying any videos relating to any of the Volvo vehicle models such that when applying the Q&A meta data, the question "what Volvo car model was shown in the video?" may be displayed.
[112] In further embodiments, the Q&A meta data generator 32 may be configured for generating the Q&A meta data entirely autonomously. In this regard, it should be noted that a combination of techniques may be utilised wherein, for example, for certain videos matching the author provided matching data 31, the Q&A meta data provided by the author may be applied but, wherein, for example, for videos not having associated matching data, the Q&A meta data generator 32 may generate the Q&A meta data autonomously.
[113] In embodiments, the Q&A meta data generator 32 may apply a plurality of rules. For example, the rules may stipulate differing types of Q&A meta data be generated for differing types of content analysis recognition. For example, for the object recognition, the rules 40 may be configured to pose the question "What colour was the Coke can?".
[114] Furthermore, for facial recognition, the rules may pose the question "how many people appeared in the video?" Furthermore, for the person recognition, the rules 40 may stipulate generation of Q&A meta data posing the instruction "Click the first time you see Jane Fonda", wherein the associated answer meta data comprises the time interval at which Jane Fonda was recognised by the person recognition module 39.
[115] In embodiments, the rules 40 may be applied in a hierarchical manner.
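A hedged sketch of such rules 40 follows, with person recognition taking precedence over facial recognition, which in turn takes precedence over object recognition; the ordering and the question templates are assumptions drawn from the examples above:

```python
# Sketch of hierarchical Q&A rules (rule set and ordering are assumptions):
# the first rule whose recognition data is present fires.
RULES = [  # (recognition type, question template)
    ("person", 'Click the first time you see {name}'),
    ("face",   "How many people appeared in the video?"),
    ("object", "What colour was the {object}?"),
]

def apply_rules(recognitions):
    """recognitions: dict mapping recognition type -> metadata dict."""
    for kind, template in RULES:
        if kind in recognitions:
            return template.format(**recognitions[kind])
    return None
```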
[116] In further embodiments, the Q&A meta data generator 32 may comprise a Q&A source analyser 35 configured for generating the Q&A meta data in accordance with third-party Q&A source content 36. In embodiments, the Q&A source content 36 may comprise web content 37. For example, for an identified keyword "Volvo", the Q&A source analyser 35 may analyse web content 37 comprising such a keyword, such as news articles or the like so as to be able to formulate Q&A meta data accordingly. For example, for a particular piece of web content, the Q&A source analyser 35 may identify a statement such as "The Volvo car is a European car model". As such, the Q&A source analyser 35 may generate Q&A meta data comprising the question "Is the Volvo car a European car model?". In further embodiments, the Q&A source analyser may substitute alternatives, such as by posing the question "is a Volvo car an Asian car model?".
[117] Further statements may be analysed, such as a statement comprising "Henry Ford founded the Ford motor corporation on 16 June 1903". As such, the Q&A source analyser 35 may pose the question "Who founded the Ford motor corporation?" and comprise associated answer data comprising multiple-choice answers comprising Henry Ford, George Ford, Harrison Ford and the like. Alternatively, the Q&A source analyser 35 may pose a question "When was the Ford motor corporation founded?".
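A naive sketch of how the Q&A source analyser 35 might turn such a statement into questions is shown below; the single regular-expression pattern is an assumption, and real web content would require more robust language processing:

```python
# Naive pattern-based sketch of statement-to-question transformation
# (the single pattern handled here is an illustrative assumption).
import re

def statement_to_question(statement):
    """Handle only the '<subject> founded <thing> on <date>' pattern."""
    m = re.match(r"(.+?) founded (.+?) on (.+)", statement)
    if not m:
        return None
    subject, thing, date = m.groups()
    return [
        (f"Who founded {thing}?", subject),
        (f"When was {thing} founded?", date),
    ]
```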
[118] In further embodiments, the Q&A source content 36 may comprise user provided content 38 wherein other users who previously watched the associated video data may pose questions which may then be subsequently applied.
[119] In further embodiments, the Q&A meta data generator 32 may comprise an artificial intelligence module 34 configured for intelligently generating Q&A meta data. Specifically, the AI module 34 may employ a machine learning technique which may be trained on various content, such as web content 37 such that, wherein, for example, the content analysis module 23 identifies, utilising object recognition, a Coke can, the AI module 34 may pose the question "what would you prefer to quench your thirst?".
[120] Once the Q&A meta data has been generated by the Q&A meta data generator 32, such may be fed to the Q&A meta database 7 ready for deployment when serving video adverts in the manner described herein.
Exemplary embodiment - Serving interactive video adverts [121] The serving of interactive video adverts commences with the receipt of a request from the client computing device 10 for a web resource. For example, the user of the client computing device 10 may wish to view a particular webpage, video or the like wherein the system 1 may be adapted to serve the interactive video advert to complement the requested resource, such as wherein the interactive video advert is embedded within the web resource, or wherein the interactive video advert is displayed prior to allowing the user to receive the requested web resource.
[122] For example, for the latter, the web user may be presented with the interactive video advert prior to viewing the requested webpage or video. [123] It should be noted that in embodiments, as opposed to serving interactive video adverts in the traditional manner wherein adverts are provided in combination with, or in anticipation of serving a web resource, a user may download a custom software application specifically adapted for the purposes of viewing interactive video adverts. In this manner, the software application may be utilised by the user as a game wherein multiple interactive video adverts are provided to the user such that the user is able to attempt to answer the questions correctly so as to win credit, move on to differing gaming levels, and the like.
[124] Upon receipt of the request for a web resource, the interactive advert server 8 may be adapted to identify a user associated with the client computing device 10 in accordance with the request. Identifying a user associated with the client computing device 10 may allow the interactive advert server 8 to subsequently allocate credit won by the user to the correct user account.
[125] Identifying the user may require the user to authenticate with the interactive advert server 8 for the purposes of providing a username and password, for example. Alternatively, the interactive advert server 8 may utilise browser tracking, such as by utilising cookies or the like to identify the user.
[126] It should be noted that, in embodiments the interactive advert server 8 may serve referral requests from other Web servers. For example, should the user request a particular news article from a third party web server, the third-party web server may send a referral request, such as by utilising an iFrame or the like so as to allow the interactive video advert to be embedded within the webpage served by the third party Web server.
Exemplary embodiment - interactive advert video data [127] As such, in response to the request, the interactive advert server 8 is adapted to serve, to the client computing device 10, interactive advert video data representing the interactive advert video.
[128] Specifically, upon receipt of the request, the interactive advert server 8 is adapted to decide which interactive video advert to serve to the user. In one embodiment, the server 8 may implement a pseudorandom algorithm so as to select a random interactive video advert. However, in another embodiment, the server 8 may be adapted to select an appropriate interactive video advert in accordance with user demographics, previous question answering competence and the like as will be described in further detail below.
[129] As such, having identified an appropriate interactive video advert to serve, the interactive advert server 8 may select, from the Q&A metadata database 7, the Q&A metadata wherein the Q&A metadata comprises an ID of an advert from the advert video data base 5. As such, the interactive advert server 8 may combine the advert video from the advert video data base 5 with the Q&A metadata from the Q&A metadata database 7 prior to serving the interactive video advert to the client computing device. In other embodiments, the interactive advert server 8 may be adapted to serve the Q&A metadata alone wherein the browser 11 independently retrieves the video advert data from the advert video data base 5 for client side augmentation with the Q&A metadata.
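The server-side assembly described above might be sketched as follows; the record shapes, field names and the `ad-42` identifier are assumptions, with the Q&A metadata keying into the advert video store by advert ID as the paragraph describes:

```python
# Hedged sketch of combining an advert video record with its Q&A metadata
# prior to serving (all record shapes and names are assumptions).
ADVERT_VIDEOS = {"ad-42": {"id": "ad-42", "url": "https://example.test/ad-42.mp4"}}

QA_METADATA = {"ad-42": {"advert_id": "ad-42",
                         "question": "How many kids were in the back seat?",
                         "answers": ["one", "two", "three", "four"],
                         "correct": "two"}}

def build_interactive_advert(advert_id):
    """Join the video record and Q&A metadata into one serveable payload."""
    qa = QA_METADATA[advert_id]
    video = ADVERT_VIDEOS[qa["advert_id"]]
    return {"video": video, "qa": qa}
```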
Exemplary embodiment - interactive Q&A interface [130] The interactive advert video data is configured so as to cause the client computing device 10 to render the video data and present, upon completion of the rendering of the video data, an interactive Q&A interface comprising the question and the plurality of candidate answers.
[131] Specifically, referring to figure 3B there is shown the display of the interactive Q&A interface 15 upon completion of the rendering of the video data posing the question 16 and the plurality of candidate answers 17. As can be seen, the interface 15 poses the question "How many kids were in the back seat?" and provides various candidate answers for the user to select.
Exemplary embodiment - interactive advert video data - client side scripting [132] So as to allow the display of the interface 15 upon completion of the rendering of the video advert, in one embodiment, the interactive advert server 8 may be adapted to serve client side script to the client computing device 10 so as to cause the display of the interface 15 upon completion of the rendering of the video.
[133] In one embodiment such client side script may take the form of JavaScript adapted to monitor the completion of the rendering of the video data such that, upon detection of the completion of the rendering of the video data the client side JavaScript may display a modal overlay comprising the interface 15. In other embodiments, as opposed to displaying the overlay, the client side script may cause the browser to redirect to a new web page comprising the interface 15.
[134] In other embodiments, as opposed to utilising a separate client side scripting payload, the interactive advert server 8 may augment the video data such that the interface 15 is rendered as part of the video data rendering process.
Exemplary embodiment - receiving answer data [135] It should be noted that the provision of candidate answers 17 as shown in figure 3B is exemplary only and, in other embodiments, the interface 15 need not necessarily display candidate answers, rather requiring the user to input the answer manually.
[136] Now, having been presented with the candidate multiple choice answers within the exemplary interface 15, the user may select the user's answer to the question. For example, the user may select option "two". [137] As such, the interactive advert server 8 may be adapted to receive, from the client computing device 10, answer data representing the user answer. Thereafter, the interactive advert server 8 is adapted to compare the user answer to the answer as specified by the Q&A metadata.
[138] If the user answer matches the answer, the interactive advert server 8 may be adapted to update the user credit amount in relation to a user account associated with the user in a user account data base 9.
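The answer comparison and credit update described above might be sketched as below; the account structure, the case-insensitive normalisation of answers and the credit amount are assumptions:

```python
# Minimal sketch of answer checking and credit allocation (account structure
# and per-answer credit amount are assumptions, not from the specification).
def process_answer(accounts, user_id, user_answer, correct_answer, credit=0.10):
    """Compare a normalised user answer to the Q&A metadata answer and,
    on a match, add credit to the user's account balance."""
    correct = user_answer.strip().lower() == correct_answer.strip().lower()
    if correct:
        accounts[user_id] = round(accounts.get(user_id, 0.0) + credit, 2)
    return correct
```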
[139] As alluded to above, in embodiments, the interactive advert server 8 may have identified the user prior to serving the interactive video advert so as to be able to immediately allocate the credit amount to the appropriate user account accordingly. However, should the identity of the user not be known prior to the serving of the interactive advert video, such as during first-time use and the like, prior to allocating the credit, the interactive advert server 8 may be adapted to require the user to create a user account.
[140] Having given the correct answer, the exemplary interface 19 as substantially shown in figure 3C may be provided wherein, as can be seen, the interface 19 informs the user that the user has provided the correct answer.
[141] In embodiments, the interface 19 may comprise a share link 20 allowing the user to share the result with other users, such as across a social media platform.
[142] As can be seen, the interface 19 may comprise option buttons 22 wherein the user may decide whether to watch another interactive video advert or to go to the requested video or web resource originally requested.
Exemplary embodiment - receiving answer data via speech recognition [143] In embodiments, the system 1 may be configured for receiving answer data utilising speech recognition. Specifically, the client computing device 10 may comprise a microphone device (not shown) configured for receiving audio feedback from the user. In this manner, as opposed to typing answers, or selecting from a multiple-choice interface, the user may rather vocalise answers.
[144] For example, when posing the question "what colour was the Volvo car shown in the advert?", the user may vocalise the answer "blue". In this manner, the system 1 may be configured for speech to text conversion so as to be able to convert the vocalisation to the text "blue" which may then be compared against the answer data from the Q&A metadata.
[145] In preferred embodiments, the speech to text conversion is performed at the client computing device 10 so as to reduce bandwidth requirements between the client computing device 10 and the interactive advert server 8.
Exemplary embodiment - measuring reaction times [146] In embodiments, the system 1 may additionally be configured for measuring user reaction times in providing the correct answer. Specifically, as opposed to answering the questions upon completion of the rendering of the video, the interactive advert server 8 may be configured for continuously monitoring for an answer provided by the user.
[147] Specifically, upon initiating the play out of the advert, a prompt may be provided to the user stating "what is the colour of the Volvo car shown in the advert?". As such, while watching the video, when the user sees the blue Volvo car for the first time, the user may vocalise the answer "blue" which may be converted to text in the manner described above. At such time the interactive advert server 8 may record the time of the provision of the answer relative to the start time of the play out of the advert such that the system 1 may allocate a greater amount of credit to the fastest responses, or allocate credit to the user amongst a plurality of users who provided the answer first.
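One possible sketch of the reaction-time allocation follows, under the assumption (one of the two options mentioned above) that the fastest correct responder among a plurality of users takes the credit:

```python
# Sketch of reaction-time scoring (scheme assumed): the fastest correct
# responder, measured relative to the advert's playout start, takes the pool.
def allocate_by_reaction(responses, pool=1.00):
    """responses: [(user_id, reaction_seconds, correct)].
    Returns {user_id: credit} for the fastest correct responder only."""
    correct = [(t, u) for u, t, ok in responses if ok]
    if not correct:
        return {}
    fastest = min(correct)[1]
    return {fastest: pool}
```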
[148] In embodiments, as opposed to vocalising the answer, the user may be required to perform another type of user interface gesture for input, such as clicking a mouse, touching a touchscreen display device or the like. In these embodiments, the system 1 may additionally record the reaction time.
Exemplary embodiment - positional feedback [149] In further embodiments, the system 1 may be configured for receiving answers by way of positional feedback from the user.
[150] For example, during the play out of the video, the user may be prompted to click on Jane Fonda. As alluded to above, Jane Fonda may be recognised by the system in an automated manner utilising a facial image recognition image processing technique.
[151] In addition to having recognised Jane Fonda, the relative position of her face within the rendered video may also be recorded by the system 1. As such, the user is prompted to not only recognise Jane Fonda, but to click on her face within the video rendering 12 wherein the system 1 thereafter determines whether the relative position of the user's gesture corresponds substantially with the detected location of her face.
[152] In embodiments, the user may be prompted to tap on the screen every time the user sees a new person within the video. In this regard, the metadata generator 4 may have already employed a facial recognition technique to identify human face patterns within the video data. As such, the feedback from the user may be compared to the detection times and, in embodiments, detection positions, of the faces detected by the metadata generator 4.
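The positional-feedback check might be sketched as follows, with the bounding-box representation and the time tolerance being assumptions:

```python
# Sketch of positional-feedback checking (tolerances assumed): a click is
# accepted if it falls within the detected face bounding box while that
# detection is current in playout time.
def click_matches(click, detection, time_tolerance=0.5):
    """click: (x, y, t); detection: dict with bounding box and timestamp."""
    x, y, t = click
    box = detection["box"]  # (left, top, right, bottom) in video pixels
    in_box = box[0] <= x <= box[2] and box[1] <= y <= box[3]
    in_time = abs(t - detection["time"]) <= time_tolerance
    return in_box and in_time
```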
[153] Similarly, in other embodiments, the user may be prompted to click on the blue Volvo car. [154] In these embodiments, the user is encouraged to complete the viewing of adverts by finding various objects displayed within the video.
[155] As alluded to above, in embodiments, the objects and their respective positions may be identified by the metadata generator 4 in an automated manner utilising image recognition techniques such as text recognition, object recognition (such as by comprising colour and shape recognition techniques), facial recognition techniques and the like.
[156] In alternative embodiments, such object relative positioning metadata may be provided by the author utilising the authoring client computer device 2.
Exemplary embodiment - withdrawal of funds [157] Furthermore, in embodiments, the interface 19 may display the user's credit balance 21 as is stored within the user account data base 9. In the exemplary embodiment, the user has a balance of $5.37.
[158] Furthermore, the user may be provided with an option to withdraw the user credit balance by way of a link. Utilising the link, the user is able to create a request to withdraw the credit amount wherein, upon receipt of the withdrawal request, the interactive advert server is adapted to, utilising the payment management module 3, initiate an electronic financial transfer in response to the withdrawal request.
Exemplary embodiment - User levels [159] In embodiments, and so as to encourage repeat user interactions with interactive video adverts, the system 1 may be adapted to implement user levels wherein, repeat or successful users are elevated to higher user levels within the system 1.
[160] For example, default users may be provided with 4 candidate answers as substantially shown in figure 3B. However, for repeat users or for users having successfully answered previous questions, the system 1 may display 3 candidate answers so as to increase the chances of providing the correct answer.
[161] In this manner, users are encouraged to repeatedly view interactive adverts.
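The user-level mechanic might be sketched as below; the streak-based thresholds are assumptions illustrating the 4-to-3 candidate-answer reduction described above:

```python
# Sketch of the user-level mechanic (thresholds assumed): repeat or
# successful users see fewer candidate answers, improving their odds.
def candidate_answer_count(correct_streak, default=4, minimum=2):
    """Drop one candidate answer for every three consecutive correct answers."""
    return max(minimum, default - correct_streak // 3)
```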
Exemplary embodiment - selection of interactive video adverts [162] As alluded to above, in embodiments, the interactive advert server 8 may be adapted to serve interactive video adverts in a pseudorandom manner. However, in embodiments, so as to increase relevance, the interactive advert server 8 may be adapted to serve interactive adverts which may be relevant to the user. [163] In one embodiment, the interactive advert server 8 may be adapted to select interactive video adverts for serving in accordance with user demographics such as age, gender and the like. For example, females may be shown interactive adverts relating to perfume.
[164] In further embodiments, users, by manipulating the data stored within the user account data base 9 may specify interests and preferences. For example, a particular user may be interested in 4x4 vehicles.
[165] In further embodiments the interactive advert server 8 may select interactive adverts in accordance with previous user interaction performances. For example, should a user usually provide the correct answer for adverts relating to 4x4 vehicles and usually provide the incorrect answers for adverts relating to perfume, the interactive advert server 8 may ascertain that the user is more interested in 4x4 vehicles and therefore favour interactive adverts relating to 4x4 vehicles.
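The inference of user interest from previous answering performance might be sketched as follows; the history shape is an assumption:

```python
# Sketch of interest inference from answering history (data shapes assumed):
# the advert category with the highest correct-answer rate is favoured.
def preferred_category(history):
    """history: [(category, answered_correctly)] -> best category or None."""
    totals = {}
    for category, ok in history:
        seen, correct = totals.get(category, (0, 0))
        totals[category] = (seen + 1, correct + (1 if ok else 0))
    if not totals:
        return None
    return max(totals, key=lambda c: totals[c][1] / totals[c][0])
```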
Exemplary embodiment - provision of hints [166] In embodiments where the user is uncertain of an answer, the user may request a hint. For example, for the question posed in figure 3B asking how many kids were in the backseat, the user may request a hint wherein the playing position of the video advert may jump to the relevant frame showing the children in the back seat of the vehicle.
[167] As such, during the advert authoring process, when posing the question, the author may specify a frame position as a hint.
Exemplary embodiment - communication with advertisers [168] In embodiments, the system 1 may be adapted to allow communication with advertisers. For example, having viewed the video advert, the interface 19 as substantially shown in Figure 3C may comprise further user interface functionality allowing the user to further communicate and interact with an advertiser.
[169] In embodiments, advertisers may offer specials, coupons and the like wherein, for example, the interface 19 may comprise a link comprising a coupon code or the like allowing the user to receive a discount on a purchase of a particular product or on the user's next purchase, such as a 20% discount.
[170] In a further embodiment, the interface 19 may allow viewers to provide feedback for brands, advertisers, goods and service providers and the like. For example, users may provide a feedback star rating for rating these brands, advertisers, goods and service providers and the like.
[171] In embodiments, the interface 19 may allow the user to communicate directly with the brand, such as by sending an email communication to the relevant person, initiating a web chat session or the like. [172] In embodiments, the users may utilise a "like" button. Results received from the like button may be collated for statistical generation purposes. In embodiments, the like button may be linked to a social media platform so as to share the interaction on the social media platform such as by displaying the like action on the user's social media profile, news feed or the like. In embodiments, users may be incentivised to share results on social media platforms wherein, for example, if a user shares a video advert on their social media profile, the user is rewarded such as by being allocated credit, being moved to the next highest user level and the like.
[173] In embodiments, the advertiser interaction with the user may be provided by the system 1 at a cost per interaction wherein, for example, for each user interaction with an advertiser, the advertiser pays a predetermined amount. For example, an advertiser may configure the system 1 so as to pay $1 for every coupon link used by the client, communication received from the client, feedback received from the client and the like. In embodiments, the client may receive a portion of the amount paid by the advertiser such that, for example, for completing a feedback survey, the client would receive $0.50 of the $1 paid by the advertiser.
[174] In embodiments, the system 1 may be adapted to implement a bidding system wherein advertisers bid for preferential display of their video adverts such that, for example, advertisers who bid the greatest amount have their respective video advert displayed first either in time or position. For example, for a user who may watch 4-5 videos on average, the advertisers bidding the highest amount would have their respective video adverts displayed first.
[175] In embodiments, the system 1 may be configured such that only those advertisers bidding greater than a certain bidding threshold would have the ability to have their video adverts shared via the social media platforms utilised by the above-mentioned social media like button. For example, the system 1 may be configured such that only those advertisers bidding greater than $1 for the display of a video advert will have the social media share button displayed on the interface 19. In alternative embodiments, as opposed to utilising a threshold, the system 1 may implement the social media platform share functionality for a particular highest bidder percentile such as, for example, enabling the social media platform share button for the 10% highest bidders.
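The bid ordering and highest-bidder-percentile share rule might be sketched as follows; the data shapes are assumptions, while the highest-bid-first ordering and the 10% figure follow the examples above:

```python
# Sketch of bid-ordered serving and the top-percentile share rule
# (bid table shape is an assumption).
def order_and_flag(bids, share_fraction=0.10):
    """bids: {advertiser: bid}. Returns the display order (highest bid
    first) and the set of advertisers whose adverts may be shared socially."""
    order = sorted(bids, key=lambda a: -bids[a])
    top_n = max(1, int(len(order) * share_fraction))
    return order, set(order[:top_n])
```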
[176] In embodiments, users may configure the system 1 with their preferred or favourite brands such that the system 1 subsequently favours the display of video adverts from these brands. For example, upon completion of a video advert, the interface 19 may allow the user to specify whether the user would like to see more adverts from the relevant advertiser, brand or the like.
[177] It should be appreciated that the system 1 may be utilised as an online advertising network as an alternative to traditional online advertising methodologies such as search engine pay per click advertising and the like.
Interpretation
Wireless: [178] The invention may be embodied using devices conforming to other network standards and for other applications, including, for example other WLAN standards and other wireless standards. Applications that can be accommodated include IEEE 802.11 wireless LANs and links, and wireless Ethernet.
[179] In the context of this document, the term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. In the context of this document, the term "wired" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a solid medium. The term does not imply that the associated devices are coupled by electrically conductive wires.
Processes: [180] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining", "analysing" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.
Processor: [181] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A "computer" or a "computing device" or a "computing machine" or a "computing platform" may include one or more processors.
[182] The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM.
Computer-Readable Medium: [183] Furthermore, a computer-readable carrier medium may form, or be included in a computer program product. A computer program product can be stored on a computer usable carrier medium, the computer program product comprising a computer readable program means for causing a processor to perform a method as described herein.
Networked or Multiple Processors: [184] In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked, to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
[185] Note that while some diagram(s) only show(s) a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
Additional Embodiments: [186] Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program, for execution on one or more processors. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium. The computer-readable carrier medium carries computer readable code including a set of instructions that, when executed on one or more processors, cause a processor or processors to implement a method. Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.
Carrier Medium: [187] The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an example embodiment to be a single medium, the term "carrier medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "carrier medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media.
Implementation: [188] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.
Means For Carrying out a Method or Function [189] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a processor device, computer system, or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
Connected [190] Similarly, it is to be noticed that the term connected, when used in the claims, should not be interpreted as being limitative to direct connections only. Thus, the scope of the expression a device A connected to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Connected" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.
Embodiments: [191] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.
[192] Similarly it should be appreciated that in the above description of example embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description of Specific Embodiments are hereby expressly incorporated into this Detailed Description of Specific Embodiments, with each claim standing on its own as a separate embodiment of this invention.
[193] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.
Different Instances of Objects [194] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
Specific Details [195] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Terminology [196] In describing the preferred embodiment of the invention illustrated in the drawings, specific terminology will be resorted to for the sake of clarity. However, the invention is not intended to be limited to the specific terms so selected, and it is to be understood that each specific term includes all technical equivalents which operate in a similar manner to accomplish a similar technical purpose. Terms such as "forward", "rearward", "radially", "peripherally", "upwardly", "downwardly", and the like are used as words of convenience to provide reference points and are not to be construed as limiting terms.
Comprising and Including [197] In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" are used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.
[198] Any one of the terms "including", "which includes" or "that includes" as used herein is also an open term that means including at least the elements/features that follow the term, but not excluding others. Thus, "including" is synonymous with and means "comprising".
Scope of Invention [199] Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as fall within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.
[200] Although the invention has been described with reference to specific examples, it will be appreciated by those skilled in the art that the invention may be embodied in many other forms.

Claims (26)

  Claims
    1. A method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates, the method comprising: retrieving video advert data of a video advert from a video advert database; performing automated content analysis of the video advert data; automating Q&A meta data generation in accordance with the automated content analysis, the Q&A meta data comprising question meta data and associated answer meta data; associating the Q&A meta data with the video advert data; serving, to a client computing device, the video advert, wherein the client computing device displays question data derived from the Q&A meta data and wherein the client computing device is configured for receiving user answer data from a user; and comparing the user answer data to the answer meta data.
  2. A method as claimed in claim 1, wherein the content analysis comprises analysis of speech to text data derived from the video advert data.
  3. A method as claimed in claim 1, wherein the content analysis comprises image recognition analysis of the video advert data.
  4. A method as claimed in claim 3, wherein the image recognition analysis comprises text recognition configured for identifying text displayed within the video advert.
  5. A method as claimed in claim 3, wherein the image recognition analysis comprises object recognition configured for identifying an object displayed within the video advert.
  6. A method as claimed in claim 5, wherein object recognition comprises reference image correlation.
  7. A method as claimed in claim 5, wherein object recognition comprises shape detection.
  8. A method as claimed in claim 5, wherein object recognition comprises colour detection.
  9. A method as claimed in claim 3, wherein the image recognition analysis comprises facial recognition.
  10. A method as claimed in claim 9, wherein the image recognition analysis is configured for recognising a number of faces within the video advert.
  11. A method as claimed in claim 3, wherein the image recognition analysis is configured for determining a timestamp at which a face is detected.
  12. A method as claimed in claim 3, wherein the image recognition analysis comprises person recognition analysis.
  13. A method as claimed in claim 12, wherein the person recognition analysis is configured for performing a correlation analysis of a face profile database.
  14. A method as claimed in claim 12, wherein the person recognition analysis is configured for determining a timestamp at which a person is identified.
  15. A method as claimed in claim 12, wherein the content analysis further comprises speech to text conversion and wherein the content analysis is configured for associating converted speech to text with an identified person.
  16. A method as claimed in claim 1, wherein content analysis comprises matching the video advert data content against matching data.
  17. A method as claimed in claim 16, wherein the matching data comprises at least one keyword and wherein the content analysis comprises identifying content of the video data matching the at least one keyword.
  18. A method as claimed in claim 17, wherein the question meta data generated by the Q&A meta data generation comprises author provided Q&A meta data.
  19. A method as claimed in claim 18, wherein the question meta data generated by the Q&A meta data generation comprises author provided Q&A meta data.
  20. A method as claimed in claim 1, wherein the question meta data generated by the Q&A meta data generation comprises a video playout timestamp and wherein comparing the answer data to the answer meta data comprises comparing a timestamp at which the answer data is received to the video playout timestamp.
  21. A method as claimed in claim 1, wherein the question meta data generated by the Q&A meta data generation comprises a relative position within a rendering frame of the video data and wherein comparing the answer data to the answer meta data comprises comparing a user provided relative position with the relative position of the Q&A meta data.
  22. A method as claimed in claim 21, wherein the relative position of the Q&A meta data is generated utilising object recognition.
  23. A method as claimed in claim 1, wherein the Q&A meta data generation comprises application of a rule from a set of rules.
  24. A method as claimed in claim 23, wherein the set of rules comprises hierarchical rules.
  25. A method as claimed in claim 1, wherein the Q&A meta data generation comprises source content analysis.
  26. A method as claimed in claim 25, wherein the source content analysis comprises analysis of web content.
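The claimed pipeline of claim 1, with the keyword matching of claims 16-17 and the playout-timestamp comparison of claim 20, can be illustrated with a minimal sketch. All names, data structures and the tolerance window below are illustrative assumptions, not the patent's implementation; the transcript here stands in for speech-to-text data derived from the video advert data (claim 2).

```python
# Hypothetical sketch of the interactive video-advert Q&A flow.
# Assumptions: a transcript is a list of (timestamp_seconds, spoken_text)
# pairs, and Q&A meta data is stored alongside the advert as simple records.
from dataclasses import dataclass

@dataclass
class QAMetaData:
    question: str             # question meta data shown to the viewer
    answer: str               # associated answer meta data
    playout_timestamp: float  # seconds into the advert (claim 20)

def generate_qa_meta_data(transcript, keywords):
    """Automated Q&A meta data generation: match speech-to-text segments
    against keyword matching data (claims 2, 16, 17)."""
    qa = []
    for start, text in transcript:
        for kw in keywords:
            if kw.lower() in text.lower():
                qa.append(QAMetaData(
                    question=f"Which product was mentioned around {start:.0f}s?",
                    answer=kw,
                    playout_timestamp=start,
                ))
    return qa

def check_answer(qa, user_answer, answered_at, window=10.0):
    """Compare the user answer data to the answer meta data, and the
    timestamp at which the answer was received to the video playout
    timestamp (claims 1 and 20)."""
    text_ok = user_answer.strip().lower() == qa.answer.lower()
    time_ok = abs(answered_at - qa.playout_timestamp) <= window
    return text_ok and time_ok
```

For example, a transcript segment `(3.0, "Try the new Acme cola today")` matched against the keyword `"Acme cola"` would yield one Q&A record anchored at 3 seconds, and an answer of "acme cola" received at 8.5 seconds would pass both the text and timing checks.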
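Claims 21-22 describe Q&A meta data that carries a relative position within the rendering frame (for instance, where object recognition located an item), against which a user-provided position is compared. The sketch below is a hypothetical illustration; the function name, coordinate convention and tolerance are assumptions, not the patent's specification.

```python
# Hypothetical relative-position comparison (claims 21-22).
# Positions are (x, y) fractions of the rendering frame, which keeps the
# comparison resolution independent across client devices.

def position_matches(meta_pos, user_pos, tolerance=0.05):
    """Compare a user-provided relative position with the relative
    position stored in the Q&A meta data, within an assumed tolerance."""
    dx = abs(meta_pos[0] - user_pos[0])
    dy = abs(meta_pos[1] - user_pos[1])
    return dx <= tolerance and dy <= tolerance
```

For example, if an object-recognition pass placed a detected product at (0.62, 0.40), a tap at (0.60, 0.43) would count as a correct answer under a 5% tolerance, while a tap in the opposite corner of the frame would not.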
AU2016310418A 2015-08-26 2016-08-26 A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates Pending AU2016310418A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AU2015903461A AU2015903461A0 (en) 2015-08-26 A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates and facilitating subsequent advertiser-user communications
AU2015903461 2015-08-26
PCT/AU2016/050805 WO2017031554A1 (en) 2015-08-26 2016-08-26 A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates

Publications (1)

Publication Number Publication Date
AU2016310418A1 true AU2016310418A1 (en) 2017-06-29

Family

ID=58099350

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2016310418A Pending AU2016310418A1 (en) 2015-08-26 2016-08-26 A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates

Country Status (2)

Country Link
AU (1) AU2016310418A1 (en)
WO (1) WO2017031554A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070106557A1 (en) * 2001-04-12 2007-05-10 Kivin Varghese Advertisements with Compensation for Attention
US7640272B2 (en) * 2006-12-07 2009-12-29 Microsoft Corporation Using automated content analysis for audio/video content consumption
US20100138852A1 (en) * 2007-05-17 2010-06-03 Alan Hirsch System and method for the presentation of interactive advertising quizzes
US8887048B2 (en) * 2007-08-23 2014-11-11 Sony Computer Entertainment Inc. Media data presented with time-based metadata
US20100162289A1 (en) * 2008-12-22 2010-06-24 General Instrument Corporation Method and apparatus for providing subscriber incentives to view advertising that accompanies programming content delivered over a content delivery system
US9554184B2 (en) * 2012-12-04 2017-01-24 24/7 Customer, Inc. Method and apparatus for increasing user engagement with video advertisements and content by summarization

Also Published As

Publication number Publication date
WO2017031554A1 (en) 2017-03-02

Similar Documents

Publication Publication Date Title
US20220351238A1 (en) System and method for digital advertising campaign optimization
US8850328B2 (en) Networked profiling and multimedia content targeting system
US8561097B2 (en) Multimedia content viewing confirmation
US8607143B2 (en) Multimedia content viewing confirmation
US8484563B2 (en) View confirmation for on-demand multimedia content
US20080288349A1 (en) Methods and systems for online interactive communication
US20110029365A1 (en) Targeting Multimedia Content Based On Authenticity Of Marketing Data
US20120130802A1 (en) Systems, methods and apparatus to design an advertising campaign
US20220076295A1 (en) Systems and methods for communicating with devices with a customized adaptive user experience
US10897638B2 (en) Generation apparatus, generation method, and non-transitory computer readable storage medium
US9269094B2 (en) System and method for creating and implementing scalable and effective surveys and testing methods with human interaction proof (HIP) capabilities
US8448204B2 (en) System and method for aggregating user data and targeting content
EP2779074A1 (en) System and method for statistically determining bias in online survey results
US20160225021A1 (en) Method and system for advertisement retargeting based on predictive user intent patterns
US20150242518A1 (en) Systems and methods for closed loop confirmation of user generated content
Semerádová et al. The (in) effectiveness of in-stream video ads: Comparison of facebook and youtube
KR20150098265A (en) Advertisement system for attracting participation of smartphone users and method thereof
US9930424B2 (en) Proxy channels for viewing audiences
WO2013016869A1 (en) Delivery of two-way interactive content
KR102402551B1 (en) Method, apparatus and computer program for providing influencer searching service
US20220038757A1 (en) System for Real Time Internet Protocol Content Integration, Prioritization and Distribution
AU2016310418A1 (en) A web server implemented method for interactive video advert authoring, serving and user interaction monitoring for increasing video advert viewing completion rates
US10157401B1 (en) Engaged view rate analysis
EP4002254A1 (en) Interconnection method for users and data and related system
US20230351442A1 (en) System and method for determining a targeted creative from multi-dimensional testing

Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: APPLICATION IS TO PROCEED UNDER THE NUMBER 2016102346.