CN111182333B - Data processing method, system and storage medium - Google Patents


Info

Publication number
CN111182333B
CN111182333B (application CN202010008743.9A)
Authority
CN
China
Prior art keywords
client device
given client
media file
party content
endpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010008743.9A
Other languages
Chinese (zh)
Other versions
CN111182333A (en)
Inventor
A. Fein
V.V. Thakur
J. Hicks
J.D. McEvoy
O. Malik
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC
Publication of CN111182333A
Application granted
Publication of CN111182333B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
                    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
                        • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
                            • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
                                • H04N21/23424 ... involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
                            • H04N21/238 Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
                                • H04N21/2387 Stream processing in response to a playback request from an end-user, e.g. for trick-play
                        • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
                            • H04N21/258 Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
                                • H04N21/25808 Management of client data
                                    • H04N21/25825 ... involving client display capabilities, e.g. screen resolution of a mobile phone
                            • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
                                • H04N21/26283 ... for associating distribution time parameters to content, e.g. to generate electronic program guide data
                            • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
                                • H04N21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
                    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
                        • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
                            • H04N21/435 Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
                        • H04N21/47 End-user applications
                            • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
                                • H04N21/4722 ... for requesting additional data associated with the content
                                    • H04N21/4725 ... using interactive regions of the image, e.g. hot spots
                    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
                        • H04N21/65 Transmission of management data between client and server
                            • H04N21/658 Transmission by the client directed to the server
                                • H04N21/6582 Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
                    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
                        • H04N21/81 Monomedia components thereof
                            • H04N21/812 ... involving advertisement data
                        • H04N21/85 Assembly of content; Generation of multimedia applications
                            • H04N21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Graphics (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

This application describes methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for providing action calls at the conclusion of a resource. In one aspect, a method includes: receiving a request for media content to be presented on a user device, the request including a second request for third-party content to be presented with the media content; identifying user device capabilities that describe information regarding system compatibility of the user device; determining third-party content to be presented with the media content, the third-party content including a first presentation duration indicating a length of time to present the third-party content; determining that an end endpoint is compatible with the user device and related to the third-party content, the end endpoint providing an interaction opportunity to request a subsequent resource for the user device; and sending data to present the third-party content, the end endpoint, and the media content.

Description

Data processing method, system and storage medium
This application is a divisional application of the invention patent application with an application date of 02/2017, application number 201780015352.6, and the invention title "Data processing method and system".
Background
This description relates to data processing.
Different devices have different capabilities. For example, mobile devices (e.g., smartphones) are typically capable of initiating telephone calls, while other types of devices may not be capable of initiating telephone calls. Some content distributed to multiple different types of devices includes embedded functionality (e.g., in end caps) that can cause the devices to initiate actions.
Disclosure of Invention
In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include actions performed by a data processing apparatus, including: receiving, at a video distribution system, a request to present media content on a user device, the request including a second request for third-party content to be presented with the media content; identifying, by the video distribution system, from data transmitted with the request, user device capabilities describing information about system compatibility of the user device; determining, based on the second request, third-party content to present with the media content, the third-party content including a first presentation duration indicating a length of time the third-party content is presented; determining that an end endpoint is compatible with the user device and related to the third-party content, the end endpoint providing the user device with an interaction opportunity to request a subsequent resource; and transmitting data to the user device to present the third-party content, the end endpoint, and the media content, wherein the end endpoint data is appended at an end of the third-party content data such that the end endpoint is presented after the third-party content, and wherein the end endpoint extends the first presentation duration of the third-party content to a second presentation duration that is the accumulated time of presentation of the third-party content and the end endpoint.
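The final clause of the method — appending the end endpoint data after the third-party content and extending the first presentation duration to a second, accumulated duration — can be sketched as follows. This is a minimal illustration; the `Presentation` structure and its field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class Presentation:
    """What the system transmits to the user device (illustrative fields)."""
    content_bytes: bytes    # third-party content data
    endpoint_bytes: bytes   # end endpoint data, appended after the content
    duration_s: float       # second (accumulated) presentation duration

def assemble(content: bytes, content_duration_s: float,
             endpoint: bytes, endpoint_duration_s: float) -> Presentation:
    # The end endpoint data is appended at the end of the third-party
    # content data so that it is presented afterward; the advertised
    # duration is the accumulated time of both.
    return Presentation(
        content_bytes=content,
        endpoint_bytes=endpoint,
        duration_s=content_duration_s + endpoint_duration_s,
    )
```

For example, a 30-second piece of third-party content with a 5-second end endpoint would be advertised to the device as a 35-second presentation.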
Other embodiments of this aspect include corresponding systems, apparatus, and computer programs configured to perform the actions of the methods encoded on computer storage devices.
Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages. The systems and methods provide users and third-party content providers with cross-platform end endpoints that can be attached to third-party content regardless of the type of device to which the third-party content is provided. The end endpoint is provided according to user device capabilities, without requiring the third-party content to include burned-in elements that are incompatible with some devices and operating systems. By providing different interaction models for different user devices, the user devices do not encounter corrupted instances of video (e.g., situations where the video freezes and the user cannot take action) caused when data that is incompatible with, or unsupported by, a user device is provided to that device. The end endpoint includes at least one action call element that provides an opportunity to request a subsequent resource. Because the end endpoint is provided based on user device compatibility, the action call also leads to subsequent resources that are compatible with the user device. The device-compatible end endpoint also provides seamless transitions between the third-party content, the end endpoint, and the presented media. The end endpoint is selectively appended to the third-party content based at least in part on the capabilities of the user device to which the third-party content is provided, ensuring that action calls provided by the end endpoint can be performed by the user device, thereby preventing failure or error of the user device. The techniques disclosed in this document enable a still image to be presented in a video playback application for a specified duration, for example, by incorporating a script that generates play pings during video playback within a specified time period. This causes the video playback application to continue to present the still image while advancing its visual playback indicator, so that the end endpoint appears to be part of the video presentation.
Another innovative aspect of the subject matter described in this specification can be embodied in a method performed by one or more data processing apparatus, the method comprising: selecting, by the one or more data processing apparatus, a media file to be presented at a given client device, the media file including a first presentation duration indicating a length of time the media file is presented; identifying capabilities of the given client device; determining, by the one or more data processing apparatus, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content beyond that provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated play delay (ping) that causes a visual play indicator to advance beyond the first presentation duration; and transmitting, by the one or more data processing apparatus, data to the given client device to sequentially present the media file and the end endpoint, wherein the end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and is presented after the media file.
Another innovative aspect of the subject matter described in this specification can be embodied in a system that includes: one or more data processing apparatus; and a non-transitory computer readable storage medium storing instructions executable by the data processing apparatus that, upon such execution, cause the data processing apparatus to perform operations comprising: selecting a media file to be presented at a given client device, the media file including a first presentation duration indicating a length of time the media file is presented; identifying capabilities of the given client device; determining, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content beyond that provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated play delay (ping) that causes a visual play indicator to advance beyond the first presentation duration; and sending data to the given client device to sequentially present the media file and the end endpoint, wherein the end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and is presented after the media file.
Another innovative aspect of the subject matter described in this specification can be embodied in a computer storage medium encoded with a computer program, the program including instructions that, when executed by data processing apparatus, cause the data processing apparatus to perform operations including: selecting a media file to be presented at a given client device, the media file including a first presentation duration indicating a length of time the media file is presented; identifying capabilities of the given client device; determining, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content beyond that provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated play delay (ping) that causes a visual play indicator to advance beyond the first presentation duration; and sending data to the given client device to sequentially present the media file and the end endpoint, wherein the end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and is presented after the media file.
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
Drawings
FIG. 1 is a block diagram illustrating an example environment for content distribution.
FIG. 2 is a block diagram illustrating a media content player with an extended duration indicator.
FIG. 3 is a block diagram illustrating a media content player displaying an ending endpoint with an action call.
FIG. 4 is a flow chart describing an example process for providing third-party content with an ending endpoint for presentation on a user device.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
The apparatus, systems, and methods described in this document enable third-party content providers to incorporate an end endpoint into various instances of third-party content in a platform-independent manner. As used in this document, the phrase "end endpoint" refers to a portion of content beyond what is presented for a given media file. A graphic, image, or video presented after an audio/video file finishes playing is an example of an end endpoint.
In some implementations, the end endpoint includes an action call that enables a user to take one or more specified actions by interacting with the end endpoint. For example, the end endpoint may include active links that initiate various actions in response to user interaction. One active link might initiate a request for a specified web page, while another might initiate a telephone call to a specified telephone number.
The video distribution system may determine whether to provide an end endpoint with a given instance of third-party content based at least in part on the user device's capabilities. For example, an end endpoint that includes an action call to initiate a telephone call may be provided only if the user device that is to receive the third-party content is capable of initiating telephone calls. Similarly, end endpoints may be distributed in a format compatible with a particular operating system, user device, media player version, and the like. Thus, an end endpoint may be provided that is customized for each user device, having the same content (e.g., the same video) but in a different format. As discussed in more detail below, the end endpoint may be appended to the third-party content, and, as shown on the user device, the duration of the third-party content may be extended by the presentation duration of the end endpoint. When the end of the third-party content is reached and the end endpoint is being presented, the visual play indicator displaying the play progress of the third-party content may continue to advance based on simulated play pings, giving the appearance that the end endpoint is part of the third-party content.
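The simulated play pings described above can be sketched as a generator that advances the play indicator past the end of the video frames while the still end endpoint is displayed. The ping interval is an assumed parameter; the patent does not specify one.

```python
def playback_positions(content_duration_s, endpoint_duration_s,
                       ping_interval_s=0.25):
    """Yield (position, what_is_showing) pairs as a player's visual play
    indicator would report them, assuming the indicator advances on
    periodic simulated play pings even after the video frames end, while
    a still end endpoint is displayed for the remaining time."""
    total = content_duration_s + endpoint_duration_s
    t = 0.0
    while t < total:
        t = min(t + ping_interval_s, total)
        showing = "video" if t <= content_duration_s else "end endpoint"
        yield t, showing
```

The indicator thus runs continuously from 0 to the second presentation duration, so a viewer sees one seamless timeline covering both the video and the appended still image.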
FIG. 1 is a block diagram of an example environment 100 in which content is distributed to user devices. A computer network 102, such as a local area network (LAN), a wide area network (WAN), the Internet, or a combination thereof, connects the video distribution system 110 to the user devices 104. The video distribution system 110 accesses third-party content 112, end endpoints 116, and media 114.
The user device 104 is an electronic device capable of requesting and receiving resources via the network 102. Example user devices 104 include personal computers, mobile communication devices (e.g., smart phones and tablets), and other devices that can send and receive data via the network 102. The user device 104 typically includes a user application 124, such as a web browser or native application, to facilitate sending and receiving data via the network 102. The user application 124 may enable a user to display and interact with text, images, video, music, and other information on web pages of web sites, typically located on the World Wide Web or a local area network. For example, the user device may initiate a media request 108 requesting given media 118 from the content server 120. The media request 108 may be generated, for example, by a user directly entering a URL (Uniform Resource Locator) of the given media 118 into a browser, or by an active link (e.g., a hypertext link) that directs the user device to the given media 118, generating the request 108 when activated (e.g., through user interaction with the link). In response to receiving the media request 108, the content server 120 may provide the given media 118 to the user device 104 for presentation.
The given media 118 may include content (e.g., music, images, video, or other content) provided by the content server 120. In some implementations, the given media 118 may include a script (e.g., one or more lines of machine-readable instructions) that automatically (e.g., without human intervention) generates the electronic request 106 for third-party content (e.g., the "third-party content request" in FIG. 1) when the given media 118 reaches the user device. As used throughout this document, third-party content refers to content (e.g., advertisements) that is presented with media 118 (e.g., video and/or audio) but is provided by an entity other than the publisher of the media 118. The entity that provides the third-party content is referred to as a third-party content provider. Typically, third-party content is combined with the media as the given media is presented (e.g., such that the third-party content presented with the media content can be dynamically changed on a per-request basis). In some implementations, the third-party content is not embedded in the media 118, but is selected and displayed only at the beginning, at the end, or at some point during play of the media 118.
The user device 104 sends a third-party content request 106 to a third-party content distribution system (TPCDS) 110. In response to receiving the request, the TPCDS 110 identifies third-party content to be presented with the media 118. The TPCDS 110 comprises one or more data processing apparatus that interact with the user device 104 and distribute the third-party content and/or end endpoints presented with the media 118 at the user device 104.
In some implementations, the content server 120 may send the third-party content request 106 to the TPCDS 110. For example, when the content server 120 receives the media request 108 from the user device 104, the content server 120 may send a request to the TPCDS 110 requesting third-party content. The content server 120 then receives the requested third-party content and may send the given media 118 and the received third-party content to the user device 104.
The TPCDS 110 includes a third-party content data store 112 and an end endpoint data store 116. The third-party content data store 112 stores third-party content and/or various data related to the third-party content (e.g., distribution criteria, budget information, click-through rates, numbers of presentations, and/or numbers of conversions of various portions of the third-party content). In some implementations, the third-party content is an advertisement that is distributed based on the outcome of a bid and/or content selection process (e.g., an auction).
In some implementations, the TPCDS 110 selects third-party content based on the results of an auction conducted in response to each third-party content request 106. The third-party content is ranked according to a score, which in some implementations is based on the value of the bid (and/or other ranking parameters) associated with the content. The TPCDS 110 also selects third-party content based on information included in the third-party content request 106, distribution criteria for the third-party content, content presentation goals of the publisher, content presentation goals of third-party content providers, informational needs of the user, and/or other content selection parameters.
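The score-based ranking can be sketched as follows. The multiplicative bid × quality form is an illustrative assumption; the text only states that the score is based on the bid value and/or other ranking parameters.

```python
def rank_third_party_content(candidates):
    """Rank candidate third-party content by an auction-style score.
    Each candidate is a dict with a 'bid' and an optional 'quality'
    ranking parameter (both hypothetical field names)."""
    return sorted(candidates,
                  key=lambda c: c["bid"] * c.get("quality", 1.0),
                  reverse=True)
```

The highest-scoring candidate would then be returned in response to the third-party content request.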
The TPCDS 110 provides cross-platform end endpoints for presentation with third-party content at various user devices 104. The end endpoint enables the third-party content provider to give the user an opportunity to take action after the third-party content is presented. In some embodiments, the opportunity to take action is provided in the form of an action invocation control. As used throughout this document, an action invocation control is a user interface element that performs a specified action (e.g., based on code that implements the action invocation control) in response to a user interaction (e.g., a click, a swipe, etc.) with the control. For example, the action invocation control may include a script (or other code) that, in response to detecting user interaction, causes the user device to initiate a telephone call, request a specified web page, open a specified native application installed on the user device, download a given application to the user device, or perform a similar action indicating that the user desires further engagement.
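An action invocation control ultimately maps a user interaction to a device action. A minimal sketch, with hypothetical action types and URI schemes (a real implementation would emit platform-specific code with the end endpoint):

```python
# Hypothetical mapping from action-call types to dispatchable URIs.
ACTION_HANDLERS = {
    "phone_call": lambda target: f"tel:{target}",
    "web_page":   lambda target: f"https://{target}",
    "open_app":   lambda target: f"{target}://open",
}

def invoke_action(action_type: str, target: str) -> str:
    """Resolve a user interaction with an action invocation control into
    a URI the user device can dispatch."""
    try:
        return ACTION_HANDLERS[action_type](target)
    except KeyError:
        raise ValueError(f"unsupported action type: {action_type}")
```

Restricting the table to action types the device supports is exactly the compatibility filtering described below: an unsupported action type should never reach the device in the first place.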
In some implementations, the end endpoint is selectively presented with various third-party content, such that an end endpoint need not be appended to the third-party content provided in response to every third-party content request 106. Instead, the third-party content distribution system (TPCDS) 110 may determine on a per-request basis whether to append an end endpoint to the third-party content, based on, for example, user device capabilities and/or user preferences.
The TPCDS 110 uses metadata from the third-party content request 106 to determine whether the user device can support the various available end endpoints. In some implementations, the TPCDS 110 uses the metadata to identify the user device type, the operating system of the user device, the version of that operating system, the native applications (and their versions) installed on the user device 104, the user device location, and/or other data indicative of user device compatibility. For example, the TPCDS 110 may identify that a user device runs a given version of a given operating system and hosts a particular version of the media playback application 124. The TPCDS 110 may then provide an end endpoint that is compatible with the detected operating system version and that will play on that version of the media playback application 124.
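Capability identification from the request metadata might look like the following sketch. The field names and the call-capability heuristic are assumptions for illustration, not taken from the patent.

```python
def capabilities_from_metadata(meta: dict) -> dict:
    """Derive a coarse capability profile from request metadata.
    Assumed fields: 'os', 'os_version', 'player_version', 'device_type'."""
    os_name = meta.get("os", "").lower()
    return {
        "os": os_name,
        "os_version": meta.get("os_version", "unknown"),
        "player_version": meta.get("player_version", "unknown"),
        # Hypothetical heuristic: only mobile phones can place calls.
        "can_place_calls": os_name in {"android", "ios"}
                           and meta.get("device_type") == "phone",
    }
```

The resulting profile is what the endpoint-filtering step consumes.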
In addition, the TPCDS 110 may provide an end endpoint whose action call initiates, upon user interaction, only actions the device's capabilities support. In some implementations, the TPCDS 110 selects an end endpoint based on whether the user device can place a telephone call, install a mobile native application, run a desktop application, utilize a mobile version of a website, and other functionality indicating the capabilities of the user device. For example, when the user device 104 is unable to place a call, the TPCDS 110 will not provide an end endpoint with an action call that places a call.
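Filtering the available end endpoints against a device's capability profile can be sketched as follows; the `requires` field is a hypothetical per-endpoint declaration of the capabilities its action calls need.

```python
def compatible_endpoints(capabilities: dict, endpoints: list) -> list:
    """Keep only the end endpoints whose action calls the device supports;
    e.g., a device that cannot place calls never receives a call endpoint.
    Each endpoint dict declares the capabilities it needs in 'requires'."""
    return [ep for ep in endpoints
            if all(capabilities.get(req, False) for req in ep["requires"])]
```

An endpoint with an empty `requires` list is compatible with every device, so there is always a safe fallback to serve.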
In some implementations, the user device 104 may have a broken experience. A broken experience may occur when the user is unable to engage with an action call because the user device 104 lacks the capability to integrate with the ending endpoint. For example, the ending endpoint may have an action call that places a phone call when the user interacts with it. In this case, if the user device 104 is unable to place a call, the user device 104 may freeze because it is attempting to execute scripts and/or code that it cannot execute. By providing an ending endpoint that is compatible with the user device 104, the TPCDS 110 prevents the user device 104 from failing (e.g., freezing, resetting, locking, etc.) due to presenting (or the user interacting with) an ending endpoint that the device cannot utilize.
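One way to avoid the broken experience described above is to filter action calls against the device's reported capabilities before the endpoint is served. The sketch below is a hypothetical illustration; the action-call structure and capability names are assumptions, not part of the described system.

```python
# Hypothetical filter: keep only action calls whose required capability the
# device actually reports, so the device is never asked to perform an action
# (e.g., placing a call) that it cannot execute.

def attach_action_calls(endpoint_actions, device_capabilities):
    """endpoint_actions: list of dicts with a 'requires' capability key.
    device_capabilities: set of capability names reported by the device."""
    return [a for a in endpoint_actions if a["requires"] in device_capabilities]
```

For a device that cannot place calls, the "contact" action call would simply be dropped rather than delivered and left to fail at interaction time.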
The TPCDS 110 may also provide an ending endpoint based not only on user device capabilities, but also on user preferences and past user behavior. In some implementations, the third-party content distribution system 110 can employ machine learning to determine user preferences. For example, machine learning techniques may be used to generate a user behavior model based on a user's responses to previously presented third-party content and/or ending endpoints. The model is then used to predict user responses to various ending endpoints and, in part, to selectively deliver ending endpoints on a per-request basis.
For purposes of illustration, assume that a given user typically interacts with shorter-duration ending endpoints and tends to skip longer-duration ending endpoints (e.g., by clicking a skip button or closing the media player). In this example, the model may predict that the given user is more likely to interact with an ending endpoint having a presentation time of three to five seconds. Thus, the TPCDS 110 will provide a shorter ending endpoint because the system 110 has learned (e.g., through machine learning or some other predictive process) that this user typically responds to shorter ending endpoints.
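A minimal stand-in for such a behavior model is sketched below: it estimates an engagement rate per duration bucket from the user's interaction history and picks the bucket most likely to elicit a response. The real system may use any machine-learned model; this counting approach and its data format are assumptions for illustration only.

```python
# Illustrative duration-preference model: given (endpoint_duration_seconds,
# interacted_bool) pairs, find the duration range with the highest observed
# engagement rate. A sketch, not the actual learned model.

from collections import defaultdict

def preferred_duration_bucket(history, bucket_size=5):
    """Return the (low, high) second range the user engages with most."""
    seen = defaultdict(int)
    engaged = defaultdict(int)
    for duration, interacted in history:
        bucket = duration // bucket_size  # e.g. 0-4s -> 0, 5-9s -> 1, ...
        seen[bucket] += 1
        if interacted:
            engaged[bucket] += 1
    # Choose the bucket with the highest observed engagement rate.
    best = max(seen, key=lambda b: engaged[b] / seen[b])
    return (best * bucket_size, (best + 1) * bucket_size)
```

A user who consistently engages with 3–5 second endpoints but skips 12–15 second ones would be served endpoints from the shortest bucket.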
In some implementations, the TPCDS 110 can also learn the type, topic, and/or theme of content presented in third-party content and/or ending endpoints with which a given user has historically interacted during ending endpoint presentation. In this case, the TPCDS 110 can learn user preferences for particular topics or themes. Thus, the TPCDS 110 may select which ending endpoints to provide based on learned user preferences. Further details of TPCDS 110 data collection and analysis are described in connection with FIG. 3.
Upon determining which third-party content to present and identifying the appropriate ending endpoint, the TPCDS 110 retrieves the third-party content and the appropriate ending endpoint from their respective data stores 112, 116. The TPCDS 110 concatenates the third-party content and the ending endpoint 122 and provides the concatenated third-party content and ending endpoint 122 to the user device 104. Data representing the concatenated media file 122 is sent for presentation on the user device 104. The TPCDS 110 provides the third-party content and the ending endpoint 122 in a format that makes the presentation appear seamless and free of glitches or interruptions. For example, the user application 124 renders the third-party content and then seamlessly transitions to rendering the ending endpoint 122.
In some implementations, when the ending endpoint is appended to the third-party content, the presentation duration of the ending endpoint extends the presentation duration of the third-party content. For example, assume that the given third-party content duration is 20 seconds and the presentation duration of the ending endpoint is set to 10 seconds. In this example, the presentation duration of the third-party content may be extended to 30 seconds, such that when the third-party content completes playing, the play timer will continue counting to 30 seconds, giving the appearance that the ending endpoint is part of the third-party content.
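The duration bookkeeping in the 20-second/10-second example above can be sketched as follows. The names are illustrative; the described system's actual data format is not specified here.

```python
# Minimal sketch of the extended-duration computation: the concatenated file
# reports one total duration, plus the transition point at which playback
# switches from the third-party content to the ending endpoint.

def concatenated_timeline(content_seconds, endpoint_seconds):
    """Return the reported total duration and the content-to-endpoint
    transition point, in seconds."""
    total = content_seconds + endpoint_seconds
    return {"total_duration": total, "transition_at": content_seconds}
```

With 20 seconds of content and a 10-second endpoint, the play timer would report a 30-second total and transition at the 20-second mark.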
Typically, the user device 104 utilizes a media content player 124 for media presentation 126. The media content player 124 displays the presentation duration of the presented media. In some implementations, during playback of the concatenated third-party content and ending endpoint 126, the media content player 124 will display the combined presentation durations of the appended ending endpoint and the third-party content in an extended duration indicator.
Fig. 2 is a block diagram illustrating a media content player 200 that includes a play indicator 202. The play indicator 202 shows the total duration of the media file being rendered by the media content player 200. For example, the play indicator 202 shows that the total duration of the media file is 35 seconds. The play indicator 202 includes a progress marker 203 that displays the playback progress of the media file (e.g., which portion of the media file is currently being presented). For example, as shown by progress marker 203 in FIG. 2, playback of the media file has reached 10 seconds out of a total of 35 seconds.
As described above, the presentation duration of the ending endpoint provided with given third-party content may be added to the duration of the third-party content. In this example, the third-party content duration is 30 seconds, as shown in portion 204 of the play indicator 202, while the presentation duration of the ending endpoint is 5 seconds, as shown in portion 206 of the play indicator 202. Thus, the combined duration of the third-party content and the ending endpoint presentation is 35 seconds, which is the total presentation time indicated by the play indicator. In this example, when playback of the third-party content reaches the 30-second mark (i.e., at the end of the third-party content duration), the media content player 200 will seamlessly transition to presenting the ending endpoint for an additional 5 seconds. When the transition to the ending endpoint occurs, the progress marker 203 will continue to advance from 30 to 35 seconds even though the third-party content has ended.
In some implementations, the third-party content distribution system 110 generates code that causes the play indicator 202 to display a combined duration of the third-party content and the presentation duration of the end endpoint, and causes the progress marker to continue to advance after the end of the play of the third-party content (e.g., up to 30 seconds in the above example).
Typically, during presentation of third-party content, data encoded within the third-party content generates progress events (e.g., pings) at regular intervals. In some implementations, each progress event updates the media content player 200 such that the progress marker advances along the play indicator 202. For example, a progress event may be generated every second. Thus, each second, upon receiving a progress event, the media content player 200 advances the progress marker 203 to the position corresponding to the next second.
In some implementations, the ending endpoint includes code that simulates progress events, similar to the code carried by the third-party content, causing the progress marker 203 to advance along the play indicator 202 during presentation of the ending endpoint. The simulated progress events enable the play indicator to continue tracking presentation time even though the third-party content has ended and the ending endpoint may be a static image. In addition, the simulated progress events (or other code provided with the ending endpoint) inform the media content player 200 that the presentation has transitioned from the third-party content to the ending endpoint, preventing errors in the data tracked and reported by the media content player 200 to the TPCDS 110, as discussed in more detail below.
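A toy model of this player behavior is sketched below: real progress events advance the marker and are counted toward content-viewing metrics, while simulated events (which begin once the third-party content ends) keep the marker advancing but are excluded from those metrics. The class and attribute names are illustrative assumptions, not the actual player implementation.

```python
# Toy model of a media player receiving one progress event per second.
# Real events: advance the marker and count toward viewing metrics.
# Simulated events: advance the marker only, and flag the endpoint transition.

class MediaContentPlayer:
    def __init__(self):
        self.progress_marker = 0   # seconds of playback shown on the indicator
        self.metric_events = 0     # real progress events counted for metrics
        self.in_endpoint = False   # whether the ending endpoint is presenting

    def on_progress_event(self, simulated=False):
        if simulated and not self.in_endpoint:
            # The first simulated event signals the transition to the endpoint.
            self.in_endpoint = True
        self.progress_marker += 1  # the marker advances for both event types
        if not self.in_endpoint:
            self.metric_events += 1  # only real content playback is measured
```

For 30 seconds of content followed by a 5-second endpoint, the marker would advance to 35 while only 30 events count toward content-viewing metrics.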
In some implementations, when the ending endpoint starts playing, the play indicator may reset, displaying only the duration of the ending endpoint. For example, if the third-party content duration is 30 seconds and the ending endpoint duration is 5 seconds, the play indicator 202 will display a total duration of 30 seconds during presentation of the third-party content. After the third-party content completes rendering, the ending endpoint will begin rendering and the play indicator 202 will reset and display a total duration of 5 seconds.
Fig. 3 is a block diagram illustrating a media content player 200 displaying an ending endpoint 301 that includes an action call 306. In some implementations, the ending endpoint 301 displays a still image, a video, or the last frame of the third-party content. For example, when playback of the third-party content reaches the final video frame, that frame may remain displayed for the entire ending endpoint presentation duration. Additionally, the font, color, texture, and other features of the ending endpoint may be customized by third-party content providers to match branding or for other purposes.
Embedded in the end-point is an action-invoking element 306 that enables a user to interact with the end-point. As previously described, user interaction with an action invocation initiates an action performed by the user device. For example, the action call may be a "contact" action call that initiates a call placed from the user device. In some implementations, the end endpoint may have multiple action calls embedded into the end endpoint.
As shown in fig. 3, the progress marker 203 has progressed beyond the total duration of the third-party content (e.g., 30 seconds) and advances based on simulated progress events (e.g., pings) generated by code provided to the user device 104 with the ending endpoint.
As described above, during the playing of the third-party content, a progress event generated by the third-party content causes the progress marker 203 to visually advance through the play indicator 202. These progress events are also used for other purposes. For example, these progress events can be used to determine how much third-party content has been presented. For example, during each play of a given third-party content, the number (or type) of progress events detected may indicate that some specified portion of the third-party content was presented in media content player 200 during that play. More specifically, the progress event can be used to determine whether playback of the third-party content is complete, whether a pre-specified portion of the third-party content is presented, and/or whether the user stops presentation of the third-party content before playback of the third-party content (or some specified portion) is complete.
In some implementations, the user device 104 sends the progress event data to the Third Party Content Distribution System (TPCDS) 110. The progress event data may indicate the number of progress events generated during playback of the third-party content and/or information about the portion of the third-party content presented. The TPCDS 110 uses the total duration of the third-party content and the received progress event data to determine a third-party content indicator. The third-party content indicator is a metric that characterizes users' preferences for third-party content based on the received progress event data. For example, the third-party content indicator may describe, for particular third-party content, the number (or portion) of users who completed viewing the third-party content, the average amount of the third-party content viewed by users, and other metrics describing users' preferences for the third-party content.
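One plausible form of such an indicator is a completion rate derived from progress-event counts, sketched below. The actual metrics the TPCDS computes are not specified in this level of detail, so the function and its event-rate assumption are illustrative only.

```python
# Hedged sketch of one third-party content indicator: the fraction of plays
# in which a full content-duration's worth of progress events was received.
# Assumes one progress event per second unless stated otherwise.

def completion_rate(event_counts_per_play, content_duration, events_per_second=1):
    """event_counts_per_play: list of progress-event counts, one per play."""
    if not event_counts_per_play:
        return 0.0
    expected = content_duration * events_per_second
    completed = sum(1 for count in event_counts_per_play if count >= expected)
    return completed / len(event_counts_per_play)
```

For a 30-second content item played four times with 30, 30, 15, and 10 events received, half the plays would count as complete.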
As previously described, the ending endpoint generates simulated progress events. To prevent the simulated progress events from skewing measurements or statistics related to the presentation of the third-party content (e.g., how many times the third-party content was played to completion by users), the media content player 200 stops monitoring progress events when it begins receiving simulated progress events (e.g., after reaching the end of the third-party content). Thus, the media content player 200 does not use the simulated progress events to determine how long the user viewed the third-party content or whether playback of the third-party content completed.
When the media content player 200 begins receiving simulated progress events, it tracks interaction events. In some implementations, an interaction event is data that indicates a user's interaction with the action call 306. For example, an interaction event may be created when a user skips the ending endpoint, engages with the action call, or views the entire ending endpoint without skipping or engaging, along with other information describing the user's interaction with the action call.
The media content player 200 sends the interaction events to the TPCDS 110. The TPCDS 110 may use the interaction events to determine an ending endpoint indicator. The ending endpoint indicator measures user engagement with the ending endpoint using the interaction events. For example, the ending endpoint indicator may describe whether a user engaged with the action call, skipped the ending endpoint, viewed the entire ending endpoint, and other actions or inactions of the user with respect to particular third-party content.
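Aggregating interaction events into an ending endpoint indicator could look like the sketch below. The event labels ("skip", "engage", "complete") are assumptions for illustration; the described system does not specify its event vocabulary.

```python
# Illustrative aggregation of interaction events into per-outcome rates,
# serving as a simple ending endpoint indicator.

from collections import Counter

def endpoint_indicator(interaction_events):
    """interaction_events: list of event labels, one per presentation,
    e.g. "skip", "engage", or "complete". Returns the rate of each."""
    counts = Counter(interaction_events)
    total = sum(counts.values())
    return {kind: count / total for kind, count in counts.items()}
```

An endpoint presented four times with two engagements, one skip, and one full view would report a 50% engagement rate.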
The TPCDS 110 may use the determined ending endpoint indicator and the previously described third-party content indicators to create analytics data based on user engagement and user preferences. The analytics data may provide ending endpoint feedback and third-party content feedback to the third-party content provider. For example, the TPCDS may determine the type of ending endpoint preferred by a user, whether a particular ending endpoint is skipped by most users, whether a particular ending endpoint is engaged by most users, the frequency with which a particular ending endpoint is engaged by a user, the subject matter of third-party content preferred by a user, and other metrics that may serve as informational data. In some implementations, the ending endpoint indicator and third-party content indicator enable the TPCDS 110 to provide third-party content providers with information regarding the level of success or interest generated by a particular ending endpoint or item of third-party content. For example, third-party content providers may be informed which third-party content and which ending endpoints are most effective, which type of ending endpoint is most effective, which types of ending endpoints and third-party content particular demographics prefer, and other analytics describing users' preferences for specific ending endpoints and third-party content.
The TPCDS 110 may provide an ending endpoint to a user based on that particular user's preferences. As previously described, the TPCDS 110 may determine the preferences of a given user based on the received ending endpoint indicator and third-party content indicator. For example, the TPCDS 110 may determine, based on those indicators, whether a given user prefers a very short ending endpoint (e.g., 2-5 seconds). Accordingly, the TPCDS 110 determines to send a shorter ending endpoint to the given user based on the user's determined preference for shorter ending endpoints.
FIG. 4 is a flow chart describing an example process for providing third-party content with an ending endpoint for presentation on a user device. The operations of process 400 may be performed by one or more data processing devices, such as Third Party Content Distribution System (TPCDS)110 of fig. 1. The operations of process 400 may also be implemented by instructions stored on a non-transitory computer-readable medium. Execution of the instructions causes one or more data processing apparatus to perform the operations of process 400.
The TPCDS receives a request to present media content at a user device (402). In some implementations, the media content request includes a second request for third-party content that is to be presented with the media content, such that both the third-party content and the media content are presented at the user device. The request for media content may be generated from a user interaction with a link (e.g., a hyperlink).
The TPCDS identifies the user device capabilities from the data sent with the request (404). The user device capabilities describe the user device's ability to utilize a particular ending endpoint. That ability may include the capability of the user device to play the particular ending endpoint and the capability of the user device to receive instructions from the action call that result in further actions being performed on the user device. In some implementations, the user device capabilities can include the user device memory size, the user device type, the operating system of the user device, the version of that operating system, the native applications and native application versions installed on the user device 104, the user device location, and other data indicating user device compatibility.
Based on the second request, the TPCDS determines third-party content to present with the media content (406). In some implementations, the third-party content includes a first presentation duration indicating the length of time for which the third-party content is presented. During the first presentation duration, the user application transmits progress events indicating that the third-party content is currently being presented at the user device. Thus, the current playback state can be accurately reflected by the progress marker indicating the playback progress of the third-party content and the ending endpoint.
The TPCDS determines an ending endpoint that is compatible with the user device and related to the third-party content (408). In some implementations, the ending endpoint provides an action call that is an actionable link corresponding to a resource different from the third-party content and that navigates the user device to content within the resource corresponding to the actionable link. For example, an action call is an interaction opportunity that enables a user to take one or more specified actions by interacting with the ending endpoint. For example, the ending endpoint may include an active link that initiates various actions in response to user interaction with the active link. Actions corresponding to interacting with an active link may include downloading a native application, placing a phone call, or requesting a particular web page. Ending endpoints with the same content may be presented on a variety of different user device platforms depending on the format selected for the ending endpoint by the TPCDS. The format of the ending endpoint is selected based on metadata received with the third-party content and media request.
Data is sent to the user device that presents the third party content, the end point, and the media content (410). In some implementations, the end endpoint data is appended at the end of the third-party content data such that the end endpoint is presented after the third-party content. Thus, the third-party content is played on the user device for a first duration, and the end endpoint plays on the user device for a presentation duration that is the difference between the second presentation duration and the first presentation duration. In some implementations, the last frame of the third-party content remains displayed for the entire presentation duration of the end endpoint.
In other embodiments, the end-point data may be appended to the beginning or middle of the third-party content data. Thus, the end endpoint may be presented at any time during the presentation of the third party content. In addition, the end endpoint extends the first presentation duration of the third-party content to a second presentation duration that is a cumulative time of presentation of the third-party content and the end endpoint.
Extending the presentation duration may provide an accurate presentation duration and establish a transition point that describes when presentation of the third-party content stops and presentation of the ending endpoint begins. In some embodiments, the ending endpoint includes encoded instructions for sending simulated progress events that advance a progress marker, which is a visual display of the elapsed presentation duration. The simulated progress events enable the user device to identify when presentation of the third-party content ends and when presentation of the ending endpoint begins.
Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by data processing apparatus. The computer storage medium may be or be included in a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Further, although the computer storage medium is not a propagated signal, the computer storage medium can be a source or destination of computer program instructions encoded in an artificially generated propagated signal. The computer storage medium may also be or be included in one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
The operations described in this specification may be implemented as operations performed by data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
The term "data processing apparatus" includes all types of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or a plurality or combination of the foregoing. An apparatus may comprise special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment may implement a variety of different computing model infrastructures, such as web services, distributed computing, and grid computing infrastructures.
A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. The computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with the instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Furthermore, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable memory device (e.g., a Universal Serial Bus (USB) flash drive), to name a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including, for example, semiconductor memory devices, such as EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory) and flash memory devices; magnetic disks, such as internal hard disks or removable disks; magneto-optical disks; and CD-ROM (compact disc read-only memory) and DVD-ROM (digital versatile disc read-only memory) disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which input is provided to the computer. Other types of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. In addition, the computer may interact with the user by sending and receiving documents to and from the device used by the user; for example, by sending a web page to a web browser on the user's client device in response to a request received from the web browser.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes: a back-end component, e.g., as a data server; or include middleware components, such as application servers; or include a front-end component, such as a client computer having a graphical user interface or a web browser through which a user may interact with an embodiment of the subject matter described in this specification; or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include local area networks ("LANs") and wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server sends data (e.g., HTML pages) to the client device (e.g., for the purpose of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) may be received from the client device at the server.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated within a single software product or packaged into multiple software products.
Thus, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.

Claims (18)

1. A method performed by one or more data processing apparatus, the method comprising:
selecting, by the one or more data processing apparatus, a media file to be presented at a given client device, the media file comprising a first presentation duration indicating a length of time the media file was presented;
identifying capabilities of the given client device;
determining, by the one or more data processing apparatus, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content that exceeds content provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated progress event that causes a visual play indicator to progress beyond the first presentation duration; and
sending, by the one or more data processing apparatus, data to the given client device to sequentially present the media file and the end endpoint, wherein end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and is presented after the media file.
2. The method of claim 1, wherein third-party content plays on the given client device for the first presentation duration, and the end endpoint plays on the given client device for a second presentation duration equal to the difference between a total presentation duration and the first presentation duration.
3. The method of claim 2, wherein a last frame of the third-party content remains displayed for at least a portion of the second presentation duration of the end endpoint.
4. The method of claim 1, wherein the capabilities of the given client device comprise at least one of: a memory size of the given client device, an operating system version of the given client device, an application installed on the given client device, or a type of the given client device.
5. The method of claim 1, wherein the end endpoint displays a call to action.
6. The method of claim 5, wherein the call to action is an actionable link corresponding to a resource that is different from third-party content, and the given client device is navigated to content within the resource corresponding to the actionable link.
7. A system for processing data, comprising:
one or more data processing devices; and
a non-transitory computer readable storage medium storing instructions executable by the one or more data processing devices and that, upon such execution, cause the one or more data processing devices to perform operations comprising:
selecting a media file to be presented at a given client device, the media file including a first presentation duration indicating a length of time the media file is presented;
identifying capabilities of the given client device;
determining, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content beyond the content provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated play delay that causes a visual play indicator to progress beyond the first presentation duration; and
sending data to the given client device to sequentially present the media file and the end endpoint, wherein end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and presented after the media file.
8. The system of claim 7, wherein third-party content plays on the given client device for the first presentation duration, and the end endpoint plays on the given client device for a second presentation duration equal to the difference between a total presentation duration and the first presentation duration.
9. The system of claim 8, wherein a last frame of the third-party content remains displayed for at least a portion of the second presentation duration of the end endpoint.
10. The system of claim 7, wherein the capabilities of the given client device comprise at least one of: a memory size of the given client device, an operating system version of the given client device, an application installed on the given client device, or a type of the given client device.
11. The system of claim 7, wherein the end endpoint displays a call to action.
12. The system of claim 11, wherein the call to action is an actionable link corresponding to a resource that is different from third-party content, and the given client device is navigated to content within the resource corresponding to the actionable link.
13. A computer storage medium encoded with a computer program, the program comprising instructions that when executed by data processing apparatus cause the data processing apparatus to perform operations comprising:
selecting a media file to be presented at a given client device, the media file including a first presentation duration indicating a length of time the media file is presented;
identifying capabilities of the given client device;
determining, based on the identified capabilities, that an end endpoint is compatible with the given client device and related to the media file, the end endpoint being content beyond the content provided by the media file and providing an interaction opportunity to request one or more resources, wherein the end endpoint generates a simulated play delay that causes a visual play indicator to progress beyond the first presentation duration; and
sending data to the given client device to sequentially present the media file and the end endpoint, wherein end endpoint data is appended at the end of the media file such that the end endpoint is separate from the media file and presented after the media file.
14. The computer storage medium of claim 13, wherein third-party content plays on the given client device for the first presentation duration, and the end endpoint plays on the given client device for a second presentation duration equal to the difference between a total presentation duration and the first presentation duration.
15. The computer storage medium of claim 14, wherein a last frame of the third-party content remains displayed for at least a portion of the second presentation duration of the end endpoint.
16. The computer storage medium of claim 13, wherein the capabilities of the given client device comprise at least one of: a memory size of the given client device, an operating system version of the given client device, an application installed on the given client device, or a type of the given client device.
17. The computer storage medium of claim 13, wherein the end endpoint displays a call to action.
18. The computer storage medium of claim 17, wherein the call to action is an actionable link corresponding to a resource that is different from third-party content, and the given client device is navigated to content within the resource that corresponds to the actionable link.
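Purely as an informal illustration of the claimed flow (select a media file, identify the client device's capabilities, append a compatible end endpoint, and extend the reported play duration), the steps might be sketched as follows. All names here (`MediaFile`, `EndCap`, `Capabilities`, `build_payload`, and the specific compatibility checks) are hypothetical and are not part of the claims or of any actual implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Capabilities:
    """Client device capabilities, per claim 4: memory, OS version, apps, type."""
    memory_mb: int
    os_version: Tuple[int, int]
    device_type: str
    installed_apps: frozenset

@dataclass
class MediaFile:
    payload: bytes
    duration_s: float  # the "first presentation duration"

@dataclass
class EndCap:
    """Interactive content appended after the media file (the 'end endpoint')."""
    payload: bytes
    duration_s: float  # the "second presentation duration"
    min_memory_mb: int
    min_os_version: Tuple[int, int]

def compatible(caps: Capabilities, end_cap: EndCap) -> bool:
    # A stand-in compatibility test based on the identified capabilities.
    return (caps.memory_mb >= end_cap.min_memory_mb
            and caps.os_version >= end_cap.min_os_version)

def build_payload(media: MediaFile, caps: Capabilities,
                  candidates: List[EndCap]) -> Tuple[bytes, float]:
    """Append the first compatible end cap to the media file and return the
    total presentation duration the client's visual play indicator should
    show (producing the claimed "simulated play delay")."""
    end_cap = next((ec for ec in candidates if compatible(caps, ec)), None)
    if end_cap is None:
        return media.payload, media.duration_s
    # End-cap data is appended at the end of the media file, so it stays
    # separate from the media and is presented after it.
    return media.payload + end_cap.payload, media.duration_s + end_cap.duration_s
```

Under this sketch, a 30-second media file paired with a 5-second end cap yields a 35-second total on a compatible device, so the progress indicator runs past the media file's own duration; on an incompatible device the media file is sent alone.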
CN202010008743.9A 2016-03-28 2017-02-02 Data processing method, system and storage medium Active CN111182333B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/082,815 2016-03-28
US15/082,815 US9774891B1 (en) 2016-03-28 2016-03-28 Cross-platform end caps
CN201780015352.6A CN108702526B (en) 2016-03-28 2017-02-02 Data processing method and system
PCT/US2017/016234 WO2017172037A1 (en) 2016-03-28 2017-02-02 Cross-platform end caps

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201780015352.6A Division CN108702526B (en) 2016-03-28 2017-02-02 Data processing method and system

Publications (2)

Publication Number Publication Date
CN111182333A CN111182333A (en) 2020-05-19
CN111182333B true CN111182333B (en) 2022-04-26

Family

ID=58044201

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780015352.6A Active CN108702526B (en) 2016-03-28 2017-02-02 Data processing method and system
CN202010008743.9A Active CN111182333B (en) 2016-03-28 2017-02-02 Data processing method, system and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201780015352.6A Active CN108702526B (en) 2016-03-28 2017-02-02 Data processing method and system

Country Status (4)

Country Link
US (2) US9774891B1 (en)
EP (1) EP3395076A1 (en)
CN (2) CN108702526B (en)
WO (1) WO2017172037A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10925009B2 (en) * 2019-05-27 2021-02-16 Apple Inc. Dynamic processing resource allocation across multiple carriers
US10924823B1 (en) * 2019-08-26 2021-02-16 Disney Enterprises, Inc. Cloud-based image rendering for video stream enrichment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103780929A (en) * 2012-10-22 2014-05-07 索尼公司 Method and system for inserting an advertisement in a media stream
CN104838627A (en) * 2012-10-23 2015-08-12 微软技术许可有限责任公司 Buffer ordering based on content access tracking
CN104967911A (en) * 2014-11-19 2015-10-07 腾讯科技(北京)有限公司 Multimedia file insertion position determining method and apparatus
CN105284119A (en) * 2013-06-10 2016-01-27 谷歌公司 Providing supplemental content in relation to embedded media

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070226240A1 (en) * 2000-01-19 2007-09-27 Sony Ericsson Mobile Communications Ab Technique for providing data objects prior to call establishment
US7769756B2 (en) * 2004-06-07 2010-08-03 Sling Media, Inc. Selection and presentation of context-relevant supplemental content and advertising
US20090013347A1 (en) * 2007-06-11 2009-01-08 Gulrukh Ahanger Systems and methods for reporting usage of dynamically inserted and delivered ads
US9595040B2 (en) 2009-10-09 2017-03-14 Viacom International Inc. Integration of an advertising unit containing interactive residual areas and digital media content
US20120215646A1 (en) 2009-12-09 2012-08-23 Viacom International, Inc. Integration of a Wall-to-Wall Advertising Unit and Digital Media Content
US20130325613A1 (en) 2012-05-29 2013-12-05 Thien Van Pham System and method for displaying geo-location based video advertisements with advertisements layered on top of video advertisements that incorporate an Add to Cart button or other button of the likes
US9066159B2 (en) * 2012-10-23 2015-06-23 Hulu, LLC User control of ad selection for subsequent ad break of a video
CN103546782B (en) 2013-07-31 2017-05-10 Tcl集团股份有限公司 Method and system for dynamically adding advertisements during video playing
US8875175B1 (en) * 2013-08-30 2014-10-28 Sony Corporation Smart live streaming event ads playback and resume method
US9524083B2 (en) 2013-09-30 2016-12-20 Google Inc. Customizing mobile media end cap user interfaces based on mobile device orientation
US9635398B2 (en) 2013-11-01 2017-04-25 Adobe Systems Incorporated Real-time tracking collection for video experiences
CN104602041B (en) * 2014-12-24 2018-03-20 北京畅游天下网络技术有限公司 Content providing device and method

Also Published As

Publication number Publication date
EP3395076A1 (en) 2018-10-31
CN111182333A (en) 2020-05-19
US10123057B2 (en) 2018-11-06
US9774891B1 (en) 2017-09-26
CN108702526A (en) 2018-10-23
WO2017172037A1 (en) 2017-10-05
US20170280175A1 (en) 2017-09-28
US20170280174A1 (en) 2017-09-28
CN108702526B (en) 2020-02-07

Similar Documents

Publication Publication Date Title
US10742526B2 (en) System and method for dynamically controlling sample rates and data flow in a networked measurement system by dynamic determination of statistical significance
US20190268427A1 (en) Multi computing device network based conversion determination based on computer network traffic
KR101155711B1 (en) Advanced advertisements
US20100005403A1 (en) Monitoring viewable times of webpage elements on single webpages
US20100010890A1 (en) Method and System for Measuring Advertisement Dwell Time
US20140278853A1 (en) Extrinsic incentivized scaffolding in computer games via advertising responsive to intrinsic game events
WO2013181518A1 (en) Providing online content
KR20160049543A (en) Content selection with precision controls
US20150213485A1 (en) Determining a bid modifier value to maximize a return on investment in a hybrid campaign
US20210365164A1 (en) User interface engagement heatmaps
US9720889B1 (en) Systems and methods for detecting auto-redirecting online content
US9094735B1 (en) Re-presentation of previously presented content
US20200238179A1 (en) System and method for managing dynamic opt-in experiences in a virtual environment
CN111182333B (en) Data processing method, system and storage medium
CN109034867A (en) click traffic detection method, device and storage medium
US10715864B2 (en) System and method for universal, player-independent measurement of consumer-online-video consumption behaviors
US9392041B2 (en) Delivery of two-way interactive content
US20180268435A1 (en) Presenting a Content Item Based on User Interaction Data
US9786014B2 (en) Earnings alerts
US20210235163A1 (en) Validating interaction with interactive secondary content
KR20010035371A (en) Method of internet-advertisement using full-screen moving picture
US11102546B1 (en) Systems and methods for obtaining and displaying videos
US20230121242A1 (en) Systems and methods for stateless maintenance of a remote state machine
WO2021151105A1 (en) Validating interaction with interactive secondary content
Percival In-Application Advertising

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant