WO2014108195A1 - Method and system for targeting delivery of video media to a user - Google Patents


Info

Publication number
WO2014108195A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
video segment
reaction
video
interests
Prior art date
Application number
PCT/EP2013/050416
Other languages
French (fr)
Inventor
Michael Huber
Vincent Huang
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget L M Ericsson (Publ) filed Critical Telefonaktiebolaget L M Ericsson (Publ)
Priority to PCT/EP2013/050416 priority Critical patent/WO2014108195A1/en
Publication of WO2014108195A1 publication Critical patent/WO2014108195A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data

Definitions

  • the present invention relates to a method and a system for targeting delivery of video media to a user.
  • the invention also relates to a computer program product configured to implement a method for targeting delivery of video media to a user.
  • Providers of media content frequently aim to target delivery of their content towards customers who are most likely to be receptive to the content. This is most prevalent in the placement of advertising content, where providers seek to target adverts towards those most likely to be susceptible to the advertising. This practice is common for example in print media, with the placement of adverts in relevant publications and/or in publications popular with target customers.
  • a method for targeting delivery of video media to a user comprising displaying a video segment to a user via a user equipment, determining a reaction of the user to the video segment, inferring user interests from the user's reaction to the video segment and selecting a video segment for future display according to the inferred user interests.
  • aspects of the present invention thus determine realtime user reaction to a segment of video, and use this reaction to infer user interests and so select video segments for future delivery according to those interests.
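The display, determine, infer, select cycle described in these aspects can be sketched in a few lines of Python. This is purely an illustrative toy, not part of the claimed method; all names (`run_targeting_loop`, the +1/-1 reaction encoding) are assumptions made for the example.

```python
# Illustrative sketch of the display/determine/infer/select loop.
# All names are hypothetical; the application claims the method
# steps, not any particular implementation.

def run_targeting_loop(segments_by_topic, reactions, rounds=3):
    """Repeatedly 'display' a segment, read a (simulated) user
    reaction, accumulate inferred interests, and pick the next
    segment accordingly."""
    interests = {}                           # topic -> interest score
    current = next(iter(segments_by_topic))  # start with any topic
    history = [current]
    for _ in range(rounds):
        reaction = reactions.get(current, 0)  # +1 interested, -1 not
        interests[current] = interests.get(current, 0) + reaction
        # select the best-scoring topic (ties resolved by dict order)
        current = max(segments_by_topic,
                      key=lambda t: interests.get(t, 0))
        history.append(current)
    return interests, history
```

With a user who reacts positively to dog-related segments and negatively to car-related ones, the loop converges on dog content after the first reaction.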
  • a video segment comprises a discrete unit of video content.
  • the length of a segment may vary from for example a few seconds to several minutes to an hour or more.
  • Examples of video segments may comprise advertising commercials, movie clips or any other video content.
  • Examples of user equipment via which video segments may be displayed to a user may comprise mobile phones, tablet computers, personal computers, televisions, set top boxes and other devices.
  • the step of selecting a video segment for future display may comprise selecting a plurality of segments for future display.
  • the step of determining a reaction of the user may comprise determining a single reaction or may comprise determining a plurality of reactions which may or may not be interrelated.
  • the selected video segment may be displayed to the user, and the steps of determining a user reaction, inferring user interests and selecting a video segment for future display may be repeated.
  • the method may allow for repeated display of selected video segments and consequent repeated refining of inferred user interests.
  • the method may further comprise classifying the user according to the inferred user interests.
  • selecting a video segment for future display may comprise selecting a video segment targeted to the user classification.
  • a range of video segments suitable for different user classifications may be prepared, from which selection of segments for future display may be made.
  • the method may further comprise generating a user profile based on the inferred user interests.
  • An individual user profile may be developed for each user, which profile may indicate their interests and allow selection of appropriate video segments for future display.
  • user classification and individual or group user profiles may be used together in the selection of video segments for future display.
  • the video segment may comprise an advertisement.
  • determining a reaction of the user to the video segment may comprise registering user reaction data during display of the video segment, associating the user reaction data with corresponding video content from the video segment, and reasoning user reaction from the associated data and content.
  • the video content may be described with metadata, and associating user reaction data with corresponding video content may comprise correlating user reaction data with the video content metadata.
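The correlation of time-stamped reaction data with video content metadata can be illustrated with a minimal sketch. The dictionary schema used here (`start`/`end` times in seconds, an `object` description, a reaction event with a timestamp `t`) is a hypothetical encoding, not one specified by the application.

```python
def associate_reactions(metadata, reaction_events):
    """Pair each time-stamped reaction event with the video content
    (described by metadata) that was on screen at that moment.
    Metadata entries: {"start": s, "end": s, "object": str}.
    Reaction events:  {"t": s, "reaction": str}."""
    pairs = []
    for event in reaction_events:
        for item in metadata:
            # the reaction is attributed to whatever content the
            # metadata says was displayed at the event's timestamp
            if item["start"] <= event["t"] < item["end"]:
                pairs.append((event["reaction"], item["object"]))
    return pairs
```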
  • determining a reaction of the user to the video segment may comprise registering user interaction with the user equipment.
  • Examples of user interaction with the user equipment may comprise user eye focus on the user equipment, changes in user separation from the user equipment, screen touches or button presses.
  • registering user interaction with the user equipment may comprise identifying whether or not the user is watching the video segment on the user equipment.
  • registering user interaction with the user equipment may comprise tracking what content within the video segment is receiving the user's attention. Such tracking may be accomplished using for example eye tracking equipment.
  • registering user interaction with the user equipment may comprise registering user movement relative to the user equipment.
  • User movement relative to user equipment may for example include increasing or decreasing separation from the user equipment or touching a screen or button of the user equipment.
  • determining a reaction of the user to the video segment may comprise registering physiological responses of the user during display of the video segment.
  • An example of a physiological response may include pupil response of the user.
  • selecting a video segment for future display may comprise selecting a video segment whose content relates to the inferred interest.
  • a selected video segment may be closely or remotely related to the inferred interest, or may for example concern a subset of the inferred interest.
  • a computer program product configured, when run on a computer, to implement a method according to the first aspect of the present invention.
  • the computer program product may be stored on a computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal, or it could be in any other form.
  • the computer program product may be incorporated in core network processors or in user equipment devices.
  • the computer program product may comprise several sub programs, some of which may be incorporated within a core network processor and others of which may be incorporated within user equipment. Some or all of the computer program product may be made available via download from the internet.
  • a system for targeting delivery of video media to a user comprising a display unit configured to display a video segment to a user, a determining unit configured to determine a reaction of the user to the video segment, an inferring unit configured to infer user interests from the user's reaction to the video segment, and a selection unit configured to select a video segment for future display according to the inferred user interests.
  • the system may be realised within a user equipment or may be realised between a user equipment and one or more network processors such as a core network processor.
  • Units of the system may be functional units which may be realised in any combination of hardware and/or software.
  • the determining unit of the system may comprise a registering unit configured to register user reaction data during display of the video segment, an associating unit, configured to associate user reaction data with corresponding video content from the video segment, and a reasoning unit, configured to reason user reaction from the associated data and content.
  • the associating unit may be configured to associate user reaction data with corresponding video content from the video segment by synchronising time stamped metadata describing the video content with corresponding user reaction data.
  • the registering unit may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data.
  • the registering unit may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data.
  • At least part of the system may be realised within a network apparatus.
  • a display unit and determining unit of the system may be realised within a user equipment, with an inferring unit and selection unit realised within a network apparatus, such as for example a core network node.
  • a user apparatus comprising a display unit configured to display a video segment to a user and a determining unit configured to determine a reaction of the user to the video segment, the apparatus further comprising a transmitting unit configured to transmit the determined reaction to a network apparatus.
  • a network apparatus comprising an inferring unit configured to infer user interests from a user reaction received from a user apparatus, and a selection unit configured to select a video segment for future display according to the inferred user interests.
  • Figure 1 is a flow chart illustrating steps in a method for targeting delivery of video media to a user.
  • Figure 2 is a block diagram illustrating functional units in a system for targeting delivery of video media to a user.
  • Figure 3 is a flow chart illustrating steps in another example of a method for targeting delivery of video media to a user.
  • Figure 4 is a block diagram illustrating functional units in another example of a system for targeting delivery of video media to a user.
  • Figure 5 is a block diagram illustrating functional units in another example of a system for targeting delivery of video media to a user, the system being distributed between user equipment and a core network.
  • Figure 1 illustrates steps in a method 100 for targeting delivery of video media to a user.
  • the method 100 comprises displaying a video segment to a user via a user equipment.
  • the method then proceeds, at step 120 to determine a reaction of the user to the video segment.
  • the method infers user interests from the user's reaction to the video segment.
  • the method selects a video segment for future display to the user according to the inferred user interests.
  • examples of the method 100 enable selection of video segments for future display to a user based on monitored user reaction to previously displayed video segments.
  • the targeting of delivery of video media is thus based on highly relevant data reflecting the user's response to previously displayed video segments.
  • each video segment displayed to the user provides information that may inform future video selections, allowing new user interests to be discovered and established interests to be explored in detail.
  • the user equipment via which the video segment is displayed to the user may be any type of user equipment suitable for such display.
  • suitable user equipment may include for example a mobile phone, tablet computer, personal computer, television or television and set top box combination.
  • a video segment may be displayed via a streaming protocol to a user accessing the streamed media via a mobile phone or tablet computer.
  • the streamed media may be made available via the internet which may be accessed by the user via a distributed telecommunications network.
  • the video segment may be displayed via a television to a user in a home or work situation.
  • a video segment comprises a discrete unit of video content.
  • the length of a segment may vary from for example a few seconds to several minutes to an hour or more.
  • Examples of video segments include advertising commercials, extracts or clips from films or television programs etc.
  • the video segments may comprise product or service advertising, entertainment media, factual media, news, current affairs or any other type of video media.
  • step 120 determines a reaction of the user to the video segment.
  • User reaction may be determined in a variety of ways, examples of which are discussed below.
  • determining a user reaction to the video segment at step 120 comprises (i) registering user reaction data during display of the video segment, (ii) associating registered user reaction data with corresponding video content, and (iii) reasoning user reaction from the corresponding reaction data and video content.
  • User reaction data encompasses a wide range of information relating to user reaction, examples of which are discussed below.
  • a first example of user reaction data is data concerning user interaction with the user equipment.
  • a user may interact with the user equipment on which a video segment is displayed in many different ways, all of which may provide insight into the user's response to the video content displayed on the user equipment.
  • a first example of interaction data may comprise presence or absence in front of the screen displaying the video content.
  • Various techniques such as face detection or presence detection may establish whether a user is present in front of the screen and therefore likely to be observing the content. This may be equally applicable for a mobile device such as a phone or tablet as for a stationary device such as a television.
  • a user of a mobile device may hold the device in front of their face in order to watch a video segment displayed on the screen.
  • Face detection equipment comprising a camera or other suitable sensor and appropriate software detects the user face within an appropriate capture range of the screen.
  • if the user moves the device away from their face, the face detection equipment registers that the user face is no longer present within the capture range of the screen, and hence the user is probably not watching the displayed video segment.
  • the camera or other sensor may be incorporated into the mobile device or may be remote from but in communication with the mobile device.
  • a user of a television may be watching a video segment from a seated position. Face or presence detection equipment in communication with the television registers the user's presence in front of the television, suggesting that the user is watching the video segment. If the user should choose to get up and leave the area of the television, the absence of the user is registered by the face or presence detection equipment, indicating that the user has elected not to watch the video segment being displayed.
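The substantially binary interested/not-interested reasoning from presence data can be sketched as follows. The sampling representation (a list of booleans, one per face-detection poll during the segment) and the 50% threshold are assumptions made for illustration.

```python
def reason_presence(presence_samples, threshold=0.5):
    """Reduce a series of presence samples (True = face detected in
    front of the screen) taken during display of a video segment to
    a binary interested / not-interested reaction."""
    if not presence_samples:
        return "unknown"
    # fraction of the segment for which the user was present
    watched = sum(presence_samples) / len(presence_samples)
    return "interested" if watched >= threshold else "not_interested"
```

A user who leaves at the start of a commercial break would produce mostly-False samples and be reasoned as uninterested in that advertisement.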
  • interaction data may comprise distance between a user and their user equipment, and particularly changes in this distance caused by user movement or user equipment movement.
  • Suitable sensors may measure user separation from a user equipment, or from a display screen of the user equipment, and hence may register changes in this distance reflecting a user approaching or moving away from the equipment. In the case of a mobile device the distance changes may reflect the user moving the device closer or further away from a face or eyes of the user.
  • a further example of interaction data may comprise user engagement with interactive elements on the user equipment. This may for example include screen touches for a touch screen controlled device, or button presses for other forms of control.
  • a still further example of interaction data may comprise user eye focus.
  • Eye tracking equipment enables the registering of user eye focus, to discern what a user is looking at.
  • eye tracking equipment may be used to determine what part of a display screen a user is looking at at any given time.
  • user reaction data comprises user physiological responses such as pupil response.
  • Pupil response including dilation and contraction of the pupils of a user, can be measured by appropriate sensing equipment.
  • User reaction can be inferred by assessing such physiological response data, as discussed in further detail below. It will be appreciated that the above examples of user reaction data are not exhaustive, but merely representative of the different types of user reaction data which may be registered.
  • examples of the method proceed to associate the registered user reaction data with the video content to which they correspond, that is the video content which provoked the registered reaction.
  • the video content may be described by meta data, which may include object description and location as well as time stamp information. This meta data may be synchronised with the registered user reaction data to allow association of reaction data with video content.
  • user absence from in front of the user equipment may be associated with the video content that was displayed in the seconds or minutes immediately preceding the user's absence, or displayed during the user's absence.
  • eye tracking data indicating user eye focus on a particular area of the screen may be associated with the content that was displayed in that part of the screen at the time the eye focus was registered.
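Associating a gaze fixation with the on-screen content it landed on can be sketched by intersecting the fixation with time-stamped metadata bounding boxes. The normalised-coordinate `box` field is a hypothetical metadata layout, not one the application specifies.

```python
def gaze_to_object(metadata, gaze):
    """Map a gaze fixation (t, x, y) to the object whose metadata
    bounding box contained that screen position at that time.
    Metadata entries: {"start", "end", "box": (x0, y0, x1, y1),
    "object"}; coordinates are normalised to the screen (0..1)."""
    t, x, y = gaze
    for item in metadata:
        x0, y0, x1, y1 = item["box"]
        if item["start"] <= t < item["end"] and x0 <= x < x1 and y0 <= y < y1:
            return item["object"]
    return None  # gaze fell on no described object
```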
  • the method proceeds to reason user reaction from the associated reaction data and video content.
  • the nature and detail of user reaction which may be reasoned is dependent upon the nature and quantity of the reaction data available. Considering the examples above, in the case of presence/absence data, a substantially binary interested/not interested reaction can be reasoned from a user's decision to remain or become absent from in front of a screen during or following display of certain content.
  • a scenario where such reasoning may be applied could include a home television situation at the moment of a commercial break. A user who chooses to move away from the television at the start of a commercial break may be reasoned to be uninterested in the subject of the first advertisement displayed during the commercial break.
  • Data concerning user separation from a display screen may provide further insight into user reaction, with a choice to approach a display screen during display of certain video content indicating an increased interest in the content displayed at the time of approach. This may be equally applicable for a mobile device user who brings the device closer to the face during display of certain content, and for a stationary device user who chooses to approach for example the television during display of certain content. It may be reasoned that the user desired to see certain content more clearly, indicating interest in that content. Eye focus on particular content may also be reasoned as an expression of interest in that particular content, while the nature of a user's interest may be reasoned from for example pupil response data. Pupil contraction may indicate distaste or negative interest, while pupil dilation may indicate pleasure or favourable interest.
  • a more complete and nuanced picture of user reaction may be achieved by combining information from several different registered reaction data, as well as monitoring the evolution of reaction data over time.
  • the following example scenarios illustrate how a picture of user reaction to video content may be developed from combined reaction data.
  • user eye focus and pupil response data are registered, indicating that user focus was generalised over the screen up to time t1, at which time user eye focus became concentrated at the top right of the screen and a pupil dilation was recorded. From this data it may be reasoned that the content displayed at the top right of the screen at time t1 prompted a reaction of favourable interest from the user. Intense eye focus combined with pupil contraction would have been reasoned as negative interest in the relevant content.
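The combined focus-plus-pupil reasoning in this scenario can be expressed as a small decision rule. The numeric encodings (focus intensity in 0..1, signed pupil change with positive meaning dilation) are illustrative assumptions, not values the application defines.

```python
def reason_valence(focus_intensity, pupil_change):
    """Combine eye-focus intensity (0..1) with pupil response
    (positive = dilation, negative = contraction) into a reaction,
    following the rule of thumb in the text: intense focus plus
    dilation suggests favourable interest, intense focus plus
    contraction suggests negative interest, weak focus suggests
    no reaction to the content."""
    if focus_intensity < 0.5:
        return "no_reaction"
    if pupil_change > 0:
        return "favourable_interest"
    if pupil_change < 0:
        return "negative_interest"
    return "neutral_interest"
```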
  • the step 120 of determining a reaction of the user to the video segment may comprise determining a range of user reactions to different content within the video segment.
  • a single reaction such as interested/not interested may be registered, or a plurality of partial reactions indicating positive or negative interest in different parts of the video content displayed may be determined.
  • the method proceeds at step 130 to infer user interests from the user's reaction to the video segment.
  • the degree to which user interests may be inferred is dependent upon the reactions that have been determined at step 120. For example, a simple "not interested" reaction to a video segment indicates that a user is uninterested in subject matter corresponding to the content of the video segment. Thus lack of interest in a video segment concerning dogs enables inference that a user is not interested in dogs. Alternatively, should a user have shown a reaction of positive interest in a Labrador during display of a video segment in which dogs were not the main subject, a user interest in dogs may be inferred.
  • Positive interest in a Labrador included as part of a video segment displaying several dogs of different breeds enables inference of a user interest in dogs and a particular interest in the Labrador breed.
  • further detail of user interests may be inferred by considering determined user reactions to a video segment in the context of determined reactions to previous video segments.
  • an inferred general interest in dogs may be refined by consideration of later reactions of positive interest or lack of interest in particular breeds of dog or aspects of dog ownership including nutrition, training, competing etc.
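This progressive refinement of an inferred interest can be modelled as accumulating signed reaction scores into a profile keyed by topic and sub-topic. The slash-separated topic naming is an assumption made for the example.

```python
def refine_interests(profile, reactions):
    """Fold a batch of (topic, valence) reactions into an interest
    profile keyed by topic, so that a general inferred interest can
    be narrowed into sub-topics over successive video segments."""
    for topic, valence in reactions:
        profile[topic] = profile.get(topic, 0.0) + valence
    return profile

# a general interest in dogs, later refined by reactions to
# breed- and ownership-specific segments
profile = refine_interests({}, [("dogs", 1.0)])
profile = refine_interests(profile, [("dogs/labrador", 1.0),
                                     ("dogs/nutrition", -0.5)])
```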
  • This evolving picture of user interests may be assembled as part of a user profile and/or may be used to classify users into different categories. The use of user classification and profiles is discussed in further detail below with reference to Figures 3 and 4.
  • the method 100 selects a video segment for future display according to the inferred user interests.
  • Selecting a video segment according to the inferred user interests may comprise selecting a video clip that appeals to the inferred user interests or that serves to provide further information about the user interests; the selection may thus be made according to the user interests for a substantially commercial purpose or for an information gathering purpose.
  • a commercial purpose may include selecting targeted advertising material that matches the user's inferred interests. Such material may be selected for future delivery having identified that the user's inferred interests make them likely to be responsive to the advertisement.
  • Another example of commercial purpose may include selecting a video segment likely to appeal to the user's interests, and thus demonstrating understanding of user requirements and providing improved user experience.
  • Selecting a video segment for information gathering may for example comprise selecting a video segment that is related to an inferred interest, thus exploring possible additional related interests the user may have.
  • a video segment may be selected that is totally unrelated to inferred interests in order to test or identify possible new areas of interest.
  • a video segment may be selected in order to gain additional insight into an established interest, possibly relating to sub categories of that interest. For example, having established that a user has an interest in dogs, video segments relating to dog ownership, dog training, working dogs, dog breeding etc may be selected, thus allowing user reaction to these video segments to be determined and more detailed information about the user's interest to be inferred.
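The choice between appealing to established interests (the commercial purpose) and probing unrelated or sub-category topics (the information gathering purpose) resembles an exploit/explore policy. The sketch below is an illustrative assumption; the `explore_rate` parameter and topic catalogue are not part of the application.

```python
import random

def select_segment(catalogue, interests, explore_rate=0.3, rng=None):
    """Pick the next segment topic: usually the best match to the
    inferred interests (commercial purpose), sometimes a topic with
    no recorded interest yet (information-gathering purpose)."""
    rng = rng or random.Random(0)
    unexplored = [t for t in catalogue if t not in interests]
    if unexplored and rng.random() < explore_rate:
        return rng.choice(unexplored)   # test a possible new interest
    scored = {t: interests.get(t, 0.0) for t in catalogue}
    return max(scored, key=scored.get)  # appeal to the strongest interest
```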
  • interests inferred from reactions to video segments displaying content of one type may inform selection of video segments displaying content of a different type or of the same type.
  • interest inferred through reaction to an advertisement may prompt selection of future advertisements or may inform selection of different video types such as movie clips etc.
  • interests inferred from reactions to a movie clip may inform selection of appropriate adverts for display to the user.
  • Other variations according to subject matter may be envisaged.
  • the method 100 of Figure 1 may be realised by a computer program which may cause a system, processor or apparatus to execute the steps of the method 100.
  • Figure 2 illustrates functional units of a system 200 which may execute the steps of the method 100, for example according to computer readable instructions received from a computer program.
  • the system 200 may for example be realised in one or more processors, system nodes or any other suitable apparatus.
  • the system 200 comprises a display unit 210, a determining unit 220, an inferring unit 230 and a selection unit 240. It will be understood that the units of the system are functional units, and may be realised in any appropriate combination of hardware and/or software.
  • the display unit 210, determining unit 220, inferring unit 230 and selection unit 240 may be configured to carry out the steps of the method 100 substantially as described above.
  • the display unit 210 may be configured to display a video segment to a user and the determining unit 220 may be configured to determine a reaction of the user to the displayed video segment.
  • the inferring unit 230 may be configured to infer user interests from the reactions determined by the determining unit 220, and the selection unit 240 may be configured to select a video segment for future display according to the inferred user interests.
  • the display unit 210 may comprise or communicate with a display screen or other display device suitable for displaying a video segment to a user.
  • the determining unit 220 may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data.
  • the determining unit 220 may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data.
  • the determining unit may also be configured to record or receive meta data describing video content displayed to the user and to synchronise the meta data with the user reaction data in order to associate user reaction with displayed video content.
  • the inferring unit 230 may be configured to apply machine learning or other algorithms to infer user interests from the determined user reactions.
  • the selection unit 240 may be configured to select a video segment for future display according to a range of criteria concerning the inferred user interests, including for example appealing to those interests or discovering more information concerning established or possible new interests. The precise criteria may be determined by a system operator.
  • examples of methods according to the present invention may incorporate use of a user profile or user classification system.
  • a user profile stores information inferred about user interests and may be continually updated with new insight gathered through analysis of user reactions to new video content.
  • a user classification system may be based upon user interest types or trends and may for example be enhanced by customer relations management data concerning the user, including for example user attributes and user network data.
  • Selection of a video segment for future display according to inferred user interests may involve consulting a user profile and/or user classification. For video segment selections aimed at information gathering, selections may for example be directed towards narrowing down a user's interests, checking for evolution in user interests over time or seeking to identify new user interests.
  • periodic calibration and/or updating of user profiles and/or classification may be conducted by selecting test segments to identify evolution in user interests over time.
  • Specific profiles and/or information types may be targeted in order to more fully populate a user profile or classify a particular user. This information may then serve to enable increasingly accurate selection of video segments for commercial purposes, either in the delivery of advertising material or the suggestion of entertainment or information material.
  • Figure 3 illustrates another example of a method for targeting delivery of video media to a user.
  • the method of Figure 3 illustrates how the steps of the method 100 may be further subdivided in order to realise the functionality described above.
  • Figure 3 also illustrates an example of an additional step which may be incorporated into the method 100 according to different examples of the invention.
  • a video segment is displayed to a user via a user equipment.
  • user reaction data is registered during display of the video segment and is associated with the corresponding video content using time stamped meta data. User reaction to the video content is then reasoned from the registered reaction data and the associated video content.
  • user interests are inferred from the determined reactions at step 130.
  • a user profile is then updated according to the user interests at step 135.
  • this updating may also comprise classifying a user into a particular user classification, category or personality type.
  • a new video segment for future display to the user is selected according to the inferred user interests as reflected in the user profile and/or classification.
  • the segment for future display may be selected according to commercial or information gathering requirements.
  • the new video segment may for example be selected from a prepared group of video segments considered to be appropriate for a particular user classification or personality type.
  • the selected video segment is then displayed to the user, the method returning to step 110 to follow the same process for the new video segment.
  • Display of the new video segment may take place immediately following display of the first video segment, allowing real-time reaction to inferred user interests, and rapid convergence upon a particular user classification or rapid completion of at least part of a user profile.
  • the selected segment may be displayed to the user at some future date or time.
  • selected video segments in the form of advertisements may be continually displayed for the duration of a commercial break in a television program or movie. As the end of the commercial break is reached, the next selected video segment is held over for display at the start of the next commercial break.
  • the method 100 of Figure 3 may be realised by a computer program which may cause a system, processor or apparatus to execute the steps of the method 100.
  • Figure 4 illustrates how the system 200 of Figure 2 may be modified to provide the additional functionality discussed with reference to Figure 3.
  • the system 200 may for example be realised in one or more processors, system nodes or any other suitable apparatus.
  • the determining unit 220 of the system 200 additionally comprises a registering unit 220a, an associating unit 220b and a reasoning unit 220c.
  • the units are functional units, and may be realised in any appropriate combination of hardware and/or software.
  • the registering unit 220a, associating unit 220b and reasoning unit 220c may be configured to provide the functionality discussed above with reference to steps 120a, 120b and 120c of the method 100.
  • the registering unit 220a may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data.
  • the registering unit may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data. Examples of suitable sensors may include a camera, distance or movement sensor.
  • each user equipment device comprises a display unit 310 and a determining unit 320.
  • the core network comprises an inferring unit 330 and a selection unit 340.
  • the display units 310 and determining units 320 at the user equipment display video segments received from the core network and determine user reaction to those video segments.
  • the determined user reactions are then sent to the core network, where individual user interests are inferred by the inferring unit 330, user profiles may be stored and updated, and selection of video segments for future display to the users is made by the selection unit 340.
  • the selected video segments may then be transmitted to the user equipment devices 400 for future display.
  • the system 300 may be implemented by dedicated user and network apparatus.
  • the user apparatus may comprise display and determining units which may be incorporated into a user equipment.
  • the display unit may comprise a display screen or other display apparatus, or may be in communication with a display screen of the user apparatus.
  • the network apparatus may comprise an inferring unit and a selection unit.
  • the determining unit may be divided between the user apparatus and the network apparatus, with for example, registering of reaction data and association with corresponding video content taking place on the user apparatus. This data may then be sent to the network apparatus for reasoning of user reactions as well as the subsequent inferring of user interests and selection of video segment or segments for future display. Examples of how a method and system of the present invention may be applied to different user situations are described below.
  • a user may be watching television programs via a television that is in communication with a communications network.
  • the television is equipped with suitable sensing devices such as a camera, distance sensor etc.
  • user reactions to an advertisement are determined and sent to a core network processor.
  • the network processor infers user interests from the user reactions, and saves those interests in a user profile.
  • the network processor selects a video segment for subsequent display to the user according to those user interests.
  • the selection is made with the aim of gathering as complete a picture as possible of the user interests, for example testing different areas of potential user interest and/or gathering more detailed information about a particular area of interest.
  • the selected video segment in the form of a new advertisement is then sent to the television for display to the user.
  • User reactions are again determined and sent to the core network for analysis and the process continues.
  • the next selected advertisement is held over for display at the start of the next commercial break.
  • the core network has gathered sufficient information on user interests to propose advertisements corresponding to user interests and hence to which the user may be most susceptible.
  • the focus of video selection may periodically revert to information gathering in order to ensure that evolution in user interests is adequately captured.
  • a user may be watching streamed video content on a mobile device.
  • the device is equipped with suitable sensing devices such as a camera, distance sensor etc.
  • the user's reaction to the segment is determined at the mobile device and sent to the core network via which the streamed video media is accessed.
  • the core network infers user interests from the user reactions, and saves those interests in a user profile.
  • the network processor selects a video segment for subsequent display to the user according to those user interests.
  • the selected video segment may subsequently be displayed to the user or suggested to the user, allowing the user to elect whether or not to watch the segment.
  • Selection of the new video segment may be directed towards gathering further information as to the user interests or may be intended merely to reflect those interests, providing an enhanced user experience.
  • User reaction during display of the selected video segment is determined and the process continues, gathering an increasingly complete and detailed picture of user interests, and so being able to select with increasing accuracy and success those video segments which will be interesting to the user.
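The scenarios above describe a closed feedback loop: display a segment, determine the reaction, infer interests, update the profile, select the next segment. A minimal sketch of that loop follows; all function and parameter names are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch of the display -> react -> infer -> select cycle.
# The callables stand in for the display unit, determining unit, inferring
# unit and selection unit described in the text.

def targeting_loop(first_segment, display, determine_reaction,
                   infer_interests, select_segment, profile, rounds=3):
    """Run the feedback cycle a fixed number of times, refining `profile`."""
    segment = first_segment
    for _ in range(rounds):
        display(segment)                        # step 110: show segment
        reaction = determine_reaction(segment)  # step 120: determine reaction
        interests = infer_interests(reaction)   # step 130: infer interests
        profile.update(interests)               # step 135: update profile
        segment = select_segment(profile)       # step 140: select next segment
    return profile
```

In a deployment following Figure 5, the first two callables would run on the user equipment and the last two in the core network.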

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method for targeting delivery of video media to a user is disclosed. The method comprises the steps of displaying a video segment to a user via a user equipment (step 110), determining a reaction of the user to the video segment (step 120), inferring user interests from the user's reaction to the video segment (step 130), and selecting a video segment for future display according to the inferred user interests (step 140). Also disclosed are a computer program product and system for targeting delivery of video media to a user.

Description

Method and System for Targeting Delivery of Video Media to a User
Technical Field The present invention relates to a method and a system for targeting delivery of video media to a user. The invention also relates to a computer program product configured to implement a method for targeting delivery of video media to a user.
Background
Providers of media content frequently aim to target delivery of their content towards customers who are most likely to be receptive to the content. This is most prevalent in the placement of advertising content, where providers seek to target adverts towards those most likely to be susceptible to the advertising. This practice is common for example in print media, with the placement of adverts in relevant publications and/or in publications popular with target customers.
The rapid expansion of internet access, and accompanying development of distributed telecommunication networks, has led to a huge increase in the means by which media content may be delivered to customers. Efficiently targeting delivery of such content to appropriate users remains an important aim for network operators and service providers. One way in which media content may be targeted towards appropriate users is by gathering information about individual users, and selecting content for delivery according to particular user attributes. Network operators and service providers retain data concerning individual users including personal data, service history, details of user equipment devices etc, all of which may be used to infer information about user preferences and hence select appropriate media content for delivery to the user. Network data such as calls made/received and connections between different users can also provide insight which may be used to target delivery of media to appropriate consumers.
While offering improvements over the purely random selection of media content for user delivery, insight based on user attribute and network data is necessarily limited in scope. Such data can provide only a partial and approximate indication of what may be very wide ranging and changing user interests and preferences. Inaccuracies and missed opportunities in the targeting of media content are inevitable consequences of the incomplete nature of the insight provided by the available data.
Summary
It is an aim of the present invention to provide a method and system which obviate or reduce at least one or more of the disadvantages mentioned above.
According to a first aspect of the present invention, there is provided a method for targeting delivery of video media to a user, comprising displaying a video segment to a user via a user equipment, determining a reaction of the user to the video segment, inferring user interests from the user's reaction to the video segment and selecting a video segment for future display according to the inferred user interests. Aspects of the present invention thus determine real-time user reaction to a segment of video, and use this reaction to infer user interests and so select video segments for future delivery according to those interests.
For the purposes of the present specification, a video segment comprises a discrete unit of video content. The length of a segment may vary from for example a few seconds to several minutes to an hour or more. Examples of video segments may comprise advertising commercials, movie clips or any other video content. Examples of user equipment via which video segments may be displayed to a user may comprise mobile phones, tablet computers, personal computers, televisions, set top boxes and other devices.
The step of selecting a video segment for future display may comprise selecting a plurality of segments for future display. Similarly, the step of determining a reaction of the user may comprise determining a single reaction or may comprise determining a plurality of reactions which may or may not be interrelated.
In some examples of the invention, the selected video segment may be displayed to the user, and the steps of determining a user reaction, inferring user interests and selecting a video segment for future display may be repeated. In this manner, the method may allow for repeated display of selected video segments and consequent repeated refining of inferred user interests. According to some examples, the method may further comprise classifying the user according to the inferred user interests. In such examples, selecting a video segment for future display may comprise selecting a video segment targeted to the user classification. In such examples, a range of video segments suitable for different user classifications may be prepared, from which selection of segments for future display may be made.
According to some examples, the method may further comprise generating a user profile based on the inferred user interests. An individual user profile may be developed for each user, which profile may indicate their interests and allow selection of appropriate video segments for future display. In some examples, user classification and individual or group user profiles may be used together in the selection of video segments for future display.
According to some examples, the video segment may comprise an advertisement.
According to some examples, determining a reaction of the user to the video segment may comprise registering user reaction data during display of the video segment, associating the user reaction data with corresponding video content from the video segment, and reasoning user reaction from the associated data and content. In some examples, the video content may be described with metadata, and associating user reaction data with corresponding video content may comprise correlating user reaction data with the video content metadata.
According to some examples, determining a reaction of the user to the video segment may comprise registering user interaction with the user equipment. Examples of user interaction with the user equipment may comprise user eye focus on the user equipment, changes in user separation from the user equipment, screen touches or button presses.
According to some examples, registering user interaction with the user equipment may comprise identifying whether or not the user is watching the video segment on the user equipment. According to some examples, registering user interaction with the user equipment may comprise tracking what content within the video segment is receiving the user's attention. Such tracking may be accomplished using for example eye tracking equipment.
According to some examples, registering user interaction with the user equipment may comprise registering user movement relative to the user equipment. User movement relative to user equipment may for example include increasing or decreasing separation from the user equipment or touching a screen or button of the user equipment.
According to some examples, determining a reaction of the user to the video segment may comprise registering physiological responses of the user during display of the video segment. An example of a physiological response may include pupil response of the user.
According to some examples, selecting a video segment for future display may comprise selecting a video segment whose content relates to the inferred interest. A selected video segment may be closely or remotely related to the inferred interest, or may for example concern a subset of the inferred interest.
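As an illustration of selection by relatedness of content, the following sketch scores candidate segments by overlap between their content tags and weighted inferred interests. The tag representation and the scoring rule are assumptions for the sketch, not part of the disclosure.

```python
# Illustrative selection by tag overlap: pick the candidate segment whose
# content tags accumulate the highest total weight under the inferred
# interests. A tag absent from the interests contributes zero.

def select_segment(candidates, interests):
    """candidates: {segment_id: set of content tags};
    interests: {tag: weight}. Returns the best-matching segment id."""
    def score(tags):
        return sum(interests.get(tag, 0.0) for tag in tags)
    return max(candidates, key=lambda seg: score(candidates[seg]))
```

A segment about a subset of an inferred interest (e.g. a specific breed within "dogs") naturally scores well under this rule, matching the behaviour described above.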
According to another aspect of the present invention, there is provided a computer program product configured, when run on a computer, to implement a method according to the first aspect of the present invention. The computer program product may be stored on a computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal, or it could be in any other form. In some examples, the computer program product may be incorporated in core network processors or in user equipment devices. In other examples, the computer program product may comprise several sub programs, some of which may be incorporated within a core network processor and others of which may be incorporated within user equipment. Some or all of the computer program product may be made available via download from the internet.
According to another aspect of the present invention, there is provided a system for targeting delivery of video media to a user, the system comprising a display unit configured to display a video segment to a user, a determining unit configured to determine a reaction of the user to the video segment, an inferring unit configured to infer user interests from the user's reaction to the video segment, and a selection unit configured to select a video segment for future display according to the inferred user interests.
In some examples, the system may be realised within a user equipment or may be realised between a user equipment and one or more network processors such as a core network processor. Units of the system may be functional units which may be realised in any combination of hardware and/or software.
According to some examples, the determining unit of the system may comprise a registering unit configured to register user reaction data during display of the video segment, an associating unit, configured to associate user reaction data with corresponding video content from the video segment, and a reasoning unit, configured to reason user reaction from the associated data and content. The associating unit may be configured to associate user reaction data with corresponding video content from the video segment by synchronising time stamped metadata describing the video content with corresponding user reaction data. The registering unit may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data. The registering unit may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data.
According to some examples, at least part of the system may be realised within a network apparatus. For example, a display unit and determining unit of the system may be realised within a user equipment, with an inferring unit and selection unit realised within a network apparatus, such as for example a core network node.
According to another aspect of the present invention, there is provided a user apparatus comprising a display unit configured to display a video segment to a user and a determining unit configured to determine a reaction of the user to the video segment, the apparatus further comprising a transmitting unit configured to transmit the determined reaction to a network apparatus. According to another aspect of the present invention, there is provided a network apparatus comprising an inferring unit configured to infer user interests from a user reaction received from a user apparatus, and a selection unit configured to select a video segment for future display according to the inferred user interests.
Brief description of the drawings
For a better understanding of the present invention, and to show more clearly how it may be carried into effect, reference will now be made, by way of example only, to the following drawings in which:
Figure 1 is a flow chart illustrating steps in a method for targeting delivery of video media to a user.
Figure 2 is a block diagram illustrating functional units in a system for targeting delivery of video media to a user.
Figure 3 is a flow chart illustrating steps in another example of a method for targeting delivery of video media to a user.
Figure 4 is a block diagram illustrating functional units in another example of a system for targeting delivery of video media to a user.
Figure 5 is a block diagram illustrating functional units in another example of a system for targeting delivery of video media to a user, the system being distributed between user equipment and a core network.
Detailed Description
Figure 1 illustrates steps in a method 100 for targeting delivery of video media to a user. In a first step 110, the method 100 comprises displaying a video segment to a user via a user equipment. The method then proceeds, at step 120, to determine a reaction of the user to the video segment. At step 130, the method infers user interests from the user's reaction to the video segment. Finally, at step 140, the method selects a video segment for future display to the user according to the inferred user interests. As discussed above, examples of the method 100 enable selection of video segments for future display to a user based on monitored user reaction to previously displayed video segments. The targeting of delivery of video media is thus based on highly relevant data reflecting the user's response to previously displayed video segments. It will be appreciated that the method is thus highly responsive to evolving user interests, as well as conducive to the construction of a detailed user profile or classification system. Each video segment displayed to the user provides information that may inform future video selections, allowing new user interests to be discovered and established interests to be explored in detail.
The user equipment via which the video segment is displayed to the user may be any type of user equipment suitable for such display. Examples of suitable user equipment may include for example a mobile phone, tablet computer, personal computer, television or television and set top box combination. In one example a video segment may be displayed via a streaming protocol to a user accessing the streamed media via a mobile phone or tablet computer. The streamed media may be made available via the internet which may be accessed by the user via a distributed telecommunications network. In other examples, the video segment may be displayed via a television to a user in a home or work situation.
According to examples of the invention, a video segment comprises a discrete unit of video content. The length of a segment may vary from for example a few seconds to several minutes to an hour or more. Examples of video segments include advertising commercials, extracts or clips from films or television programs etc. The video segments may comprise product or service advertising, entertainment media, factual media, news, current affairs or any other type of video media.
Referring again to Figure 1, following display of the video segment to the user in step 110, the method proceeds at step 120 to determine a reaction of the user to the video segment. User reaction may be determined in a variety of ways, examples of which are discussed below.
In one example, determining a user reaction to the video segment at step 120 comprises (i) registering user reaction data during display of the video segment, (ii) associating registered user reaction data with corresponding video content, and (iii) reasoning user reaction from the corresponding reaction data and video content.
User reaction data encompasses a wide range of information relating to user reaction, examples of which are discussed below.
A first example of user reaction data is data concerning user interaction with the user equipment. A user may interact with the user equipment on which a video segment is displayed in many different ways, all of which may provide insight into the user's response to the video content displayed on the user equipment.
A first example of interaction data may comprise presence or absence in front of the screen displaying the video content. Various techniques such as face detection or presence detection may establish whether a user is present in front of the screen and therefore likely to be observing the content. This may be equally applicable for a mobile device such as a phone or tablet as for a stationary device such as a television. In one example, a user of a mobile device may hold the device in front of their face in order to watch a video segment displayed on the screen. Face detection equipment, comprising a camera or other suitable sensor and appropriate software, detects the user's face within an appropriate capture range of the screen. If the user places the mobile device on a surface or holds the device away from the face, the face detection equipment registers that the user's face is no longer present within the capture range of the screen, and hence the user is probably not watching the displayed video segment. The camera or other sensor may be incorporated into the mobile device or may be remote from but in communication with the mobile device. In another example, a user of a television may be watching a video segment from a seated position. Face or presence detection equipment in communication with the television registers the user's presence in front of the television, suggesting that the user is watching the video segment. If the user should choose to get up and leave the area of the television, the absence of the user is registered by the face or presence detection equipment, indicating that the user has elected not to watch the video segment being displayed.
Another example of interaction data may comprise distance between a user and their user equipment, and particularly changes in this distance caused by user movement or user equipment movement. Suitable sensors may measure user separation from a user equipment, or from a display screen of the user equipment, and hence may register changes in this distance reflecting a user approaching or moving away from the equipment. In the case of a mobile device the distance changes may reflect the user moving the device closer or further away from a face or eyes of the user. A further example of interaction data may comprise user engagement with interactive elements on the user equipment. This may for example include screen touches for a touch screen controlled device, or button presses for other forms of control.
A still further example of interaction data may comprise user eye focus. Eye tracking equipment enables the registering of user eye focus, to discern what a user is looking at. In the present examples, eye tracking equipment may be used to determine what part of a display screen a user is looking at at any given time.
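The different kinds of reaction data described above — presence, separation, screen touches, eye focus, pupil response — can all be registered in a common form for later processing. A minimal, assumed representation is a time-stamped event per sensor reading; the field names and event kinds below are illustrative, not part of the disclosure.

```python
# Hypothetical uniform record for registered user reaction data. Each sensor
# reading becomes a time-stamped event so that it can later be associated with
# the video content that was on display at that moment.

from dataclasses import dataclass
from typing import Any

@dataclass
class ReactionEvent:
    timestamp: float   # seconds from the start of the video segment
    kind: str          # e.g. "presence", "distance", "eye_focus", "pupil"
    value: Any         # sensor reading, e.g. True, 0.45, (x, y), "dilation"

def register(events, timestamp, kind, value):
    """Append one reading to the running list of registered reaction data."""
    events.append(ReactionEvent(timestamp, kind, value))
    return events
```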
Another example of user reaction data comprises user physiological responses such as pupil response. Pupil response, including dilation and contraction of the pupils of a user, can be measured by appropriate sensing equipment. User reaction can be inferred by assessing such physiological response data, as discussed in further detail below. It will be appreciated that the above examples of user reaction data are not exhaustive, but merely representative of the different types of user reaction data which may be registered.
Having registered user reaction data of one or more different types, examples of the method proceed to associate the registered user reaction data with the video content to which they correspond, that is the video content which provoked the registered reaction. The video content may be described by metadata, which may include object description and location as well as time stamp information. This metadata may be synchronised with the registered user reaction data to allow association of reaction data with video content. For example, in the case of presence detection, user absence from in front of the user equipment may be associated with the video content that was displayed in the seconds or minutes immediately preceding the user's absence, or displayed during the user's absence. In the case of eye tracking data, user eye focus on a particular area of the screen may be associated with the content that was displayed in that part of the screen at the time the eye focus was registered. User distance and physiological response may similarly be associated with the video content that was being displayed at the time the particular reaction data was registered.
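The association step described above reduces to a timestamp lookup: each metadata entry describes what was on screen over an interval, and each reaction event is matched to the entry covering its timestamp. The following sketch assumes a simple interval-based metadata shape, which is an illustration rather than a specified format.

```python
# Sketch of associating reaction data with content via time-stamped metadata.
# events:   list of (timestamp, reaction) tuples.
# metadata: list of (start, end, content_description) tuples describing what
#           was displayed over each interval of the video segment.

def associate(events, metadata):
    """Return (reaction, content) pairs by matching each event's timestamp
    to the metadata interval that covers it."""
    pairs = []
    for ts, reaction in events:
        for start, end, content in metadata:
            if start <= ts < end:
                pairs.append((reaction, content))
                break
    return pairs
```

Reactions whose timestamps fall outside every described interval are simply dropped; a fuller implementation might instead attribute them to the immediately preceding content, as the presence-detection example suggests.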
Finally, the method proceeds to reason user reaction from the associated reaction data and video content. The nature and detail of user reaction which may be reasoned is dependent upon the nature and quantity of the reaction data available. Considering the examples above, in the case of presence/absence data, a substantially binary interested/not interested reaction can be reasoned from a user's decision to remain or become absent from in front of a screen during or following display of certain content. A scenario where such reasoning may be applied could include a home television situation at the moment of a commercial break. A user who chooses to move away from the television at the start of a commercial break may be reasoned to be uninterested in the subject of the first advertisement displayed during the commercial break.
Data concerning user separation from a display screen may provide further insight into user reaction, with a choice to approach a display screen during display of certain video content indicating an increased interest in the content displayed at the time of approach. This may be equally applicable for a mobile device user who brings the device closer to the face during display of certain content, and for a stationary device user who chooses to approach for example the television during display of certain content. It may be reasoned that the user desired to see certain content more clearly, indicating interest in that content. Eye focus on particular content may also be reasoned as an expression of interest in that particular content, while the nature of a user's interest may be reasoned from for example pupil response data. Pupil contraction may indicate distaste or negative interest, while pupil dilation may indicate pleasure or favourable interest. It will be appreciated that a more complete and nuanced picture of user reaction may be achieved by combining information from several different registered reaction data, as well as monitoring the evolution of reaction data over time. The following example scenarios illustrate how a picture of user reaction to video content may be developed from combined reaction data. In a first example scenario, user eye focus and pupil response data are registered, indicating that user focus was generalised over the screen up to time t1, at which time user eye focus became concentrated at the top right of the screen and a pupil dilation was recorded. From this data it may be reasoned that the content displayed at the top right of the screen at time t1 prompted a reaction of favourable interest from the user. Intense eye focus combined with pupil contraction would have been reasoned as negative interest in the relevant content.
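The first example scenario above can be expressed as a small rule base: concentrated eye focus plus pupil dilation is reasoned as favourable interest, concentrated focus plus contraction as negative interest. The rules below are a toy illustration of this kind of reasoning; a practical system would use richer, calibrated models.

```python
# Toy rule-based reasoning over combined reaction signals, mirroring the
# eye-focus/pupil-response scenario in the text. The labels and rule set are
# illustrative assumptions.

def reason_reaction(eye_focus_concentrated, pupil_response):
    """eye_focus_concentrated: bool; pupil_response: 'dilation',
    'contraction' or None. Returns a reasoned reaction label."""
    if eye_focus_concentrated and pupil_response == "dilation":
        return "favourable interest"
    if eye_focus_concentrated and pupil_response == "contraction":
        return "negative interest"
    return "no clear reaction"
```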
In a second example scenario, user presence and separation are registered, indicating that a user approached a display screen at time t1, returned to their original position at time t2 and became absent from the capture range of the sensing equipment at time t3. From this data it may be reasoned that the user experienced particular interest in the content displayed between times t1 and t2 but was uninterested in the content displayed immediately preceding or at time t3. It will be appreciated that a range of other scenarios may be envisaged, involving different combinations of reaction data. Algorithms enabling the interpretation of reaction data to reason user reaction may be developed according to particular operational requirements. Examples of suitable algorithms that may enable interpretation of different types of reaction data are available in the art.
It will be appreciated from the above that the step 120 of determining a reaction of the user to the video segment may comprise determining a range of user reactions to different content within the video segment. A single reaction such as interested/not interested may be registered, or a plurality of partial reactions indicating positive or negative interest in different parts of the video content displayed may be determined.
Referring again to Figure 1 and following determination of a user reaction to the video segment, the method proceeds at step 130 to infer user interests from the user's reaction to the video segment. The degree to which user interests may be inferred is dependent upon the reactions that have been determined at step 120. For example, a simple "not interested" reaction to a video segment indicates that a user is uninterested in subject matter corresponding to the content of the video segment. Thus lack of interest in a video segment concerning dogs enables inference that a user is not interested in dogs. Alternatively, should a user have shown a reaction of positive interest in a Labrador during display of a video segment in which dogs were not the main subject, a user interest in dogs may be inferred. Positive interest in a Labrador included as part of a video segment displaying several dogs of different breeds enables inference of a user interest in dogs and a particular interest in the Labrador breed. It will be appreciated that further detail of user interests may be inferred by considering determined user reactions to a video segment in the context of determined reactions to previous video segments. Thus an inferred general interest in dogs may be refined by consideration of later reactions of positive interest or lack of interest in particular breeds of dog or aspects of dog ownership including nutrition, training, competing etc. This evolving picture of user interests may be assembled as part of a user profile and/or may be used to classify users into different categories. The use of user classification and profiles is discussed in further detail below with reference to Figures 3 and 4. Referring again to Figure 1, in a final step 140, the method 100 selects a video segment for future display according to the inferred user interests.
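One minimal way to accumulate such inferences is a running score per subject, refined as further reactions arrive. This sketch and its +1/-1 polarity scheme are illustrative assumptions, not the algorithm of the description.

```python
from collections import defaultdict

def update_interests(interests, reactions):
    """Fold inferred reactions into a running interest score per subject.

    reactions: iterable of (subject, polarity) pairs, with polarity +1
    for positive interest and -1 for lack of interest.
    """
    for subject, polarity in reactions:
        interests[subject] += polarity
    return interests

profile = defaultdict(int)
# Positive interest in a Labrador among several displayed breeds:
update_interests(profile, [("dogs", +1), ("labrador", +1)])
# A later lack of interest in a training segment refines the picture:
update_interests(profile, [("dog training", -1)])
```

Repeated application over successive video segments yields the evolving picture of user interests described above.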
Selecting a video segment according to the inferred user interests may comprise selecting a video clip that appeals to the inferred user interests or that serves to provide further information about the user interests; the selection may thus be made according to the user interests for a substantially commercial purpose or for an information gathering purpose. A commercial purpose may include selecting targeted advertising material that matches the user's inferred interests. Such material may be selected for future delivery having identified that the user's inferred interests make them likely to be responsive to the advertisement. Another example of a commercial purpose may include selecting a video segment likely to appeal to the user's interests, thus demonstrating understanding of user requirements and providing an improved user experience.
Selecting a video segment for information gathering may, for example, comprise selecting a video segment that is related to an inferred interest, thus exploring possible additional related interests the user may have. Alternatively, a video segment may be selected that is totally unrelated to inferred interests, in order to test or identify possible new areas of interest. According to another example, a video segment may be selected in order to gain additional insight into an established interest, possibly relating to sub-categories of that interest. For example, having established that a user has an interest in dogs, video segments relating to dog ownership, dog training, working dogs, dog breeding etc. may be selected, thus allowing user reaction to these video segments to be determined and more detailed information about the user's interest to be inferred. It will be appreciated that interests inferred from reactions to video segments displaying content of one type may inform selection of video segments displaying content of a different type or of the same type. For example, interest inferred through reaction to an advertisement may prompt selection of future advertisements or may inform selection of different video types such as movie clips. Alternatively, interests inferred from reactions to a movie clip may inform selection of appropriate adverts for display to the user. Other variations according to subject matter may be envisaged.
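The two selection purposes above could be sketched as alternative strategies over a segment catalogue. Names, data shapes and the deterministic tie-breaking here are illustrative assumptions.

```python
def select_segment(catalogue, interests, purpose):
    """Select a segment id for future display.

    catalogue: list of (segment_id, subject) pairs.
    interests: dict mapping subject -> interest score.
    purpose:   "commercial" (match the strongest inferred interest) or
               "information gathering" (probe an untested subject).
    """
    if purpose == "commercial" and interests:
        best = max(interests, key=interests.get)
        matches = [seg for seg, subject in catalogue if subject == best]
        if matches:
            return matches[0]
    # Information gathering: probe a subject with no recorded reaction yet.
    unknown = [seg for seg, subject in catalogue if subject not in interests]
    return unknown[0] if unknown else catalogue[0][0]

catalogue = [("ad1", "dogs"), ("ad2", "cars"), ("ad3", "labrador")]
interests = {"dogs": 2, "labrador": 1}
commercial = select_segment(catalogue, interests, "commercial")        # "ad1"
probe = select_segment(catalogue, interests, "information gathering")  # "ad2"
```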
The method 100 of Figure 1 may be realised by a computer program which may cause a system, processor or apparatus to execute the steps of the method 100. Figure 2 illustrates functional units of a system 200 which may execute the steps of the method 100, for example according to computer readable instructions received from a computer program. The system 200 may for example be realised in one or more processors, system nodes or any other suitable apparatus.
With reference to Figure 2, the system 200 comprises a display unit 210, a determining unit 220, an inferring unit 230 and a selection unit 240. It will be understood that the units of the system are functional units, and may be realised in any appropriate combination of hardware and/or software.
According to an example of the invention, the display unit 210, determining unit 220, inferring unit 230 and selection unit 240 may be configured to carry out the steps of the method 100 substantially as described above. The display unit 210 may be configured to display a video segment to a user and the determining unit 220 may be configured to determine a reaction of the user to the displayed video segment. The inferring unit 230 may be configured to infer user interests from the reactions determined by the determining unit 220, and the selection unit 240 may be configured to select a video segment for future display according to the inferred user interests.
The display unit 210 may comprise or communicate with a display screen or other display device suitable for displaying a video segment to a user. The determining unit 220 may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data. The determining unit 220 may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data. The determining unit may also be configured to record or receive meta data describing video content displayed to the user and to synchronise the meta data with the user reaction data in order to associate user reaction with displayed video content. The inferring unit 230 may be configured to apply machine learning or other algorithms to infer user interests from the determined user reactions. The selection unit 240 may be configured to select a video segment for future display according to a range of criteria concerning the inferred user interests, including for example appealing to those interests or discovering more information concerning established or possible new interests. The precise criteria may be determined by a system operator.
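Synchronising meta data with reaction data might, under the assumption of time-stamped content metadata, be sketched as a lookup of the content on display at each reaction time; the data shapes here are illustrative.

```python
import bisect

def associate(reaction_events, metadata):
    """Pair each registered reaction with the content on display at that time.

    metadata:        list of (start_time, content_tag), sorted by start time.
    reaction_events: list of (time, reaction_datum).
    """
    starts = [t for t, _ in metadata]
    pairs = []
    for t, reaction in reaction_events:
        i = bisect.bisect_right(starts, t) - 1  # content that started at or before t
        if i >= 0:
            pairs.append((reaction, metadata[i][1]))
    return pairs

metadata = [(0, "intro"), (10, "labrador"), (20, "poodle")]
events = [(12, "pupil dilation"), (21, "looked away")]
pairs = associate(events, metadata)
# [('pupil dilation', 'labrador'), ('looked away', 'poodle')]
```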
As discussed above, examples of methods according to the present invention may incorporate use of a user profile or user classification system. According to an example, a user profile stores information inferred about user interests and may be continually updated with new insight gathered through analysis of user reactions to new video content. A user classification system may be based upon user interest types or trends and may for example be enhanced by customer relations management data concerning the user, including for example user attributes and user network data. Selection of a video segment for future display according to inferred user interests may involve consulting a user profile and/or user classification. For video segment selections aimed at information gathering, selections may for example be directed towards narrowing down a user's interests, checking for evolution in user interests over time or seeking to identify new user interests. In some examples, periodic calibration and/or updating of user profiles and/or classification may be conducted by selecting test segments to identify evolution in user interests over time. Specific profiles and/or information types may be targeted in order to more fully populate a user profile or classify a particular user. This information may then serve to enable increasingly accurate selection of video segments for commercial purposes, either in the delivery of advertising material or the suggestion of entertainment or information material.
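A classification step might, for illustration, assign a user to the category whose subjects best overlap the positively scored interests in the profile; the categories and scoring scheme here are assumptions, not part of the description.

```python
def classify(profile, categories):
    """Assign the category whose subjects best overlap positive interests.

    profile:    dict mapping subject -> interest score.
    categories: dict mapping category name -> set of subjects.
    """
    positive = {s for s, score in profile.items() if score > 0}
    return max(categories, key=lambda name: len(categories[name] & positive))

profile = {"dogs": 2, "labrador": 1, "hiking": 1}
categories = {
    "pet lover": {"dogs", "labrador", "cats"},
    "outdoors": {"hiking", "camping"},
}
category = classify(profile, categories)  # "pet lover" (overlap of 2 vs 1)
```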
Figure 3 illustrates another example of a method for targeting delivery of video media to a user. The method of Figure 3 illustrates how the steps of the method 100 may be further subdivided in order to realise the functionality described above. Figure 3 also illustrates an example of an additional step which may be incorporated into the method 100 according to different examples of the invention.
With reference to Figure 3, in a first step 110 a video segment is displayed to a user via a user equipment. In subsequent steps 120a, 120b and 120c, user reaction data is registered during display of the video segment and is associated with the corresponding video content using time stamped meta data, and user reaction to the video content is then reasoned from the registered reaction data and the associated video content. These steps are conducted substantially as described above. Once a user reaction to the video segment has been determined, user interests are inferred from the determined reactions at step 130. A user profile is then updated according to the user interests at step 135. As discussed above, this updating may also comprise classifying a user into a particular user classification, category or personality type. Finally, a new video segment for future display to the user is selected according to the inferred user interests as reflected in the user profile and/or classification. As discussed above, the segment for future display may be selected according to commercial or information gathering requirements. The new video segment may for example be selected from a prepared group of video segments considered to be appropriate for a particular user classification or personality type. The selected video segment is then displayed to the user, the method returning to step 110 to follow the same process for the new video segment. Display of the new video segment may take place immediately following display of the first video segment, allowing real-time reaction to inferred user interests, and rapid convergence upon a particular user classification or rapid completion of at least part of a user profile. Alternatively, the selected segment may be displayed to the user at some future date or time.
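The cycle of Figure 3 can be sketched as a loop over pluggable functions standing in for the units described; the function names are illustrative stand-ins, not the patent's interfaces.

```python
def targeting_loop(display, register, associate, reason, infer,
                   update_profile, select, first_segment):
    """Run the display / determine / infer / select cycle of Figure 3
    until select() returns no further segment."""
    segment = first_segment
    while segment is not None:
        display(segment)                     # step 110: display segment
        data = register()                    # step 120a: register reaction data
        pairs = associate(data, segment)     # step 120b: associate with content
        reactions = reason(pairs)            # step 120c: reason user reaction
        interests = infer(reactions)         # step 130:  infer user interests
        profile = update_profile(interests)  # step 135:  update user profile
        segment = select(profile)            # step 140:  select next segment

# Trivial stand-ins demonstrating the flow of control:
shown, queue = [], ["ad2", None]
targeting_loop(display=shown.append,
               register=lambda: [],
               associate=lambda data, seg: data,
               reason=lambda pairs: pairs,
               infer=lambda reactions: {},
               update_profile=lambda interests: interests,
               select=lambda profile: queue.pop(0),
               first_segment="ad1")
# shown is now ["ad1", "ad2"]
```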
In one example, selected video segments in the form of advertisements may be continually displayed for the duration of a commercial break in a television program or movie. As the end of the commercial break is reached, the next selected video segment is held over for display at the start of the next commercial break. As for the method 100 illustrated in Figure 1, the method 100 of Figure 3 may be realised by a computer program which may cause a system, processor or apparatus to execute the steps of the method 100. Figure 4 illustrates how the system 200 of Figure 2 may be modified to provide the additional functionality discussed with reference to Figure 3. The system 200 may for example be realised in one or more processors, system nodes or any other suitable apparatus.
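The hold-over behaviour described for commercial breaks could be sketched as a small scheduler; the class and method names are illustrative assumptions.

```python
from collections import deque

class AdScheduler:
    """Queue selected advertisements, releasing one only if it fits in the
    remaining break time; otherwise it is held over to the next break."""

    def __init__(self):
        self._queue = deque()

    def enqueue(self, segment_id, length_s):
        self._queue.append((segment_id, length_s))

    def next_for_break(self, remaining_s):
        if self._queue and self._queue[0][1] <= remaining_s:
            return self._queue.popleft()[0]
        return None  # held over for the start of the next commercial break

sched = AdScheduler()
sched.enqueue("ad-dogs", 30)
sched.enqueue("ad-cars", 30)
first = sched.next_for_break(45)  # "ad-dogs" fits the remaining 45 s
held = sched.next_for_break(10)   # None: "ad-cars" is held over
```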
With reference to Figure 4, the determining unit 220 of the system 200 additionally comprises a registering unit 220a, an associating unit 220b and a reasoning unit 220c. It will be understood that the units are functional units, and may be realised in any appropriate combination of hardware and/or software. According to an example of the invention, the registering unit 220a, associating unit 220b and reasoning unit 220c may be configured to provide the functionality discussed above with reference to steps 120a, 120b and 120c of the method 100. The registering unit 220a may be configured to register user reaction data of different kinds, including for example data concerning user interaction with a user equipment, or user physiological response data. The registering unit may for example comprise or be configured to receive data from various sensors which may be integrated into or in communication with a user equipment and may supply the interaction or response data. Examples of suitable sensors may include a camera, distance or movement sensor.
Examples of the system may be realised within a user equipment. Alternatively, in some examples, the system may be distributed between a user equipment and a network processor such as a core network processor. This may allow certain of the reasoning, inferring and selection steps to be conducted at a network level, requiring less functionality to be incorporated at a user equipment level. Figure 5 illustrates one example of how a system 300 for targeting delivery of video media may be distributed between a core network 500 and a plurality of different user equipment devices 400. According to the illustrated example, each user equipment device comprises a display unit 310 and a determining unit 320. The core network comprises an inferring unit 330 and a selection unit 340. The display units 310 and determining units 320 at the user equipment display video segments received from the core network and determine user reaction to those video segments. The determined user reactions are then sent to the core network, where individual user interests are inferred by the inferring unit 330, user profiles may be stored and updated, and selection of video segments for future display to the users is made by the selection unit 340. The selected video segments may then be transmitted to the user equipment devices 400 for future display.
In some examples the system 300 may be implemented by dedicated user and network apparatus. The user apparatus may comprise display and determining units which may be incorporated into a user equipment. The display unit may comprise a display screen or other display apparatus, or may be in communication with a display screen of the user apparatus. The network apparatus may comprise an inferring unit and a selection unit. In other examples the determining unit may be divided between the user apparatus and the network apparatus, with, for example, registering of reaction data and association with corresponding video content taking place in the user apparatus. This data may then be sent to the network apparatus for reasoning of user reactions as well as the subsequent inferring of user interests and selection of video segment or segments for future display. Examples of how a method and system of the present invention may be applied to different user situations are described below.
In a first example a user may be watching television programs via a television that is in communication with a communications network. The television is equipped with suitable sensing devices such as a camera, distance sensor etc. During a commercial break in the television programming, user reactions to an advertisement are determined and sent to a core network processor. The network processor infers user interests from the user reactions, and saves those interests in a user profile. The network processor then selects a video segment for subsequent display to the user according to those user interests. During the first few commercial breaks, the selection is made with the aim of gathering as complete a picture as possible of the user interests, for example testing different areas of potential user interest and/or gathering more detailed information about a particular area of interest. The selected video segment in the form of a new advertisement is then sent to the television for display to the user. User reactions are again determined and sent to the core network for analysis and the process continues. When the television programming is scheduled to restart at the end of the commercial break, the next selected advertisement is held over for display at the start of the next commercial break. Within a short number of commercial breaks, the core network has gathered sufficient information on user interests to propose advertisements corresponding to user interests and hence to which the user may be most susceptible. The focus of video selection may periodically revert to information gathering in order to ensure that evolution in user interests is adequately captured. In another example, a user may be watching streamed video content on a mobile device. The device is equipped with suitable sensing devices such as a camera, distance sensor etc.
During a first streamed video segment, the user's reaction to the segment is determined at the mobile device and sent to the core network via which the streamed video media is accessed. As in the first example, the core network infers user interests from the user reactions, and saves those interests in a user profile. The network processor then selects a video segment for subsequent display to the user according to those user interests. The selected video segment may subsequently be displayed to the user or suggested to the user, allowing the user to elect whether or not to watch the segment. Selection of the new video segment may be directed towards gathering further information as to the user interests or may be intended merely to reflect those interests, providing an enhanced user experience. User reaction during display of the selected video segment is determined and the process continues, gathering an increasingly complete and detailed picture of user interests, and so being able to select with increasing accuracy and success those video segments which will be interesting to the user.
It should be noted that the above-mentioned examples illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim, "a" or "an" does not exclude a plurality, and a single processor or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.

Claims

1. A method for targeting delivery of video media to a user, comprising:
displaying a video segment to a user via a user equipment (step 110);
determining a reaction of the user to the video segment (step 120);
inferring user interests from the user's reaction to the video segment (step 130); and
selecting a video segment for future display according to the inferred user interests (step 140).
2. A method as claimed in claim 1, further comprising classifying the user according to the inferred user interests, and wherein selecting a video segment comprises selecting a video segment targeted to the user classification.
3. A method as claimed in claim 1 or 2, further comprising generating a user profile based on the inferred user interests (step 135).
4. A method as claimed in any one of the preceding claims, wherein the video segment comprises an advertisement.
5. A method as claimed in any one of the preceding claims, wherein determining a reaction of the user to the video segment (step 120) comprises:
registering user reaction data during display of the video segment (step 120a);
associating the user reaction data with corresponding video content from the video segment (step 120b); and
reasoning user reaction from the associated data and content (step 120c).
6. A method as claimed in any one of the preceding claims, wherein determining a reaction of the user to the video segment (step 120) comprises registering user interaction with the user equipment.
7. A method as claimed in claim 6, wherein registering user interaction with the user equipment comprises identifying whether or not the user is watching the video segment on the user equipment.
8. A method as claimed in claim 6 or 7, wherein registering user interaction with the user equipment comprises tracking what content within the video segment is receiving the user's attention.
9. A method as claimed in any one of claims 6 to 8, wherein registering user interaction with the user equipment comprises registering user movement relative to the user equipment.
10. A method as claimed in any one of the preceding claims, wherein determining a reaction of the user to the video segment (step 120) comprises registering physiological responses of the user during display of the video segment.
11. A method as claimed in any one of the preceding claims, wherein selecting a video segment for future display comprises selecting a video segment whose content relates to the inferred interest.
12. A computer program product configured, when run on a computer, to implement a method as claimed in any one of the preceding claims.
13. A system (200) for targeting delivery of video media to a user, the system comprising:
a display unit (210) configured to display a video segment to a user;
a determining unit (220) configured to determine a reaction of the user to the video segment;
an inferring unit (230) configured to infer user interests from the user's reaction to the video segment; and
a selection unit (240) configured to select a video segment for future display according to the inferred user interests.
14. A system (200) as claimed in claim 13, wherein the determining unit (220) comprises
a registering unit (220a) configured to register user reaction data during display of the video segment,
an associating unit (220b), configured to associate user reaction data with corresponding video content from the video segment, and
a reasoning unit (220c), configured to reason user reaction from the associated data and content.
15. A system (300) as claimed in claim 13 or 14, wherein at least part of the system is realised within a network apparatus.
PCT/EP2013/050416 2013-01-10 2013-01-10 Method and system for targeting delivery of video media to a user WO2014108195A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/050416 WO2014108195A1 (en) 2013-01-10 2013-01-10 Method and system for targeting delivery of video media to a user


Publications (1)

Publication Number Publication Date
WO2014108195A1 true WO2014108195A1 (en) 2014-07-17

Family

ID=47594680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/050416 WO2014108195A1 (en) 2013-01-10 2013-01-10 Method and system for targeting delivery of video media to a user

Country Status (1)

Country Link
WO (1) WO2014108195A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11076207B2 (en) 2018-11-02 2021-07-27 International Business Machines Corporation System and method for adaptive video
US11256857B2 (en) * 2020-02-27 2022-02-22 Fujifilm Business Innovation Corp. Apparatus and non-transitory computer readable medium for proposal creation corresponding to a target person

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002080546A1 (en) * 2001-03-28 2002-10-10 Koninklijke Philips Electronics N.V. Method and apparatus for automatically selecting an alternate item based on user behavior
WO2003043336A1 (en) * 2001-11-13 2003-05-22 Koninklijke Philips Electronics N.V. Affective television monitoring and control
EP2464138A1 (en) * 2010-12-09 2012-06-13 Samsung Electronics Co., Ltd. Multimedia system and method of recommending multimedia content
CA2775814A1 (en) * 2012-05-04 2012-07-10 Microsoft Corporation Advertisement presentation based on a current media reaction




Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13700663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13700663

Country of ref document: EP

Kind code of ref document: A1