US20190261054A1 - Dynamic content generation - Google Patents

Dynamic content generation

Info

Publication number
US20190261054A1
Authority
US
United States
Prior art keywords
content
placeholder
user
main content
media object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/259,681
Other versions
US11589125B2
Inventor
Christian Souche
Lucia GATTONI
Edouard Mathon
Richard Vidal
Current Assignee
Accenture Global Solutions Ltd
Original Assignee
Accenture Global Solutions Ltd
Priority date
Application filed by Accenture Global Solutions Ltd
Assigned to ACCENTURE GLOBAL SOLUTIONS LIMITED (assignment of assignors' interest). Assignors: Lucia Gattoni, Edouard Mathon, Christian Souche, Richard Vidal
Publication of US20190261054A1
Application granted
Publication of US11589125B2
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241Advertisements
    • G06Q30/0251Targeted advertisements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4722End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content
    • H04N21/4725End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for requesting additional data associated with the content using interactive regions of the image, e.g. hot spots
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4318Generation of visual interfaces for content selection or interaction; Content or additional data rendering by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data

Definitions

  • Such content may include, but is not limited to, audio files, videos and images.
  • Such media objects to be inserted in content may have to be selected based on various criteria, such as regulatory guidelines and a user's historical data. For example, in video content showing an alcoholic beverage, the alcoholic beverage may be replaced with a non-alcoholic one in countries where displaying alcoholic beverages is not allowed.
  • targeted product display refers to an approach where products and services are offered to an individual as media objects in content, based on preferences determined from the user's historical records. Such a focused approach helps organizations cater to consumers' specific preferences.
  • FIG. 1 illustrates a block diagram of a system, according to an example embodiment of the present disclosure.
  • FIG. 2 illustrates another block diagram depicting functionalities of the system, according to another example embodiment of the present disclosure.
  • FIG. 3 illustrates a moving location of a placeholder in an animation, according to an example embodiment of the present disclosure.
  • FIG. 4 illustrates a hardware platform for implementation of the system, according to an example embodiment of the present disclosure.
  • FIG. 5 illustrates a computer-implemented method depicting functionality of the system, according to an example embodiment of the present disclosure.
  • a system comprising a receiver to receive a main content.
  • the main content includes at least one of a still image, an audio content or a video content.
  • the system further comprises a detector in communication with the receiver to detect at least one potential placeholder, hereinafter placeholder, in the main content for placement of a media object.
  • the media object includes at least one of an audio file, a video file, an image, or a text.
  • the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the system comprises an obtainer in communication with the receiver and the detector.
  • the obtainer is to obtain a plurality of media objects having placement attributes corresponding to the placeholder in the main content, where a placement attribute is indicative of characteristic(s) of a media object compatible with the placeholder, for instance to appropriately fit in the placeholder.
  • the system further comprises a selector in communication with the receiver, the detector, and the obtainer.
  • the selector is to select a media object from among the plurality of media objects for being placed in the placeholder of the main content, based on a user profile.
  • the system comprises a generator in communication with the receiver, the detector, the obtainer, and the selector. The generator is to generate a final content indicative of the selected media object embedded in the main content.
  • a system comprising a receiver to receive a main content.
  • the main content includes at least one of a still image, an audio content or a video content.
  • the system further comprises a detector in communication with the receiver to detect a placeholder in the main content for placement of a media object.
  • the media object includes at least one of an audio file, a video file, an image, or a text.
  • a placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the system comprises an obtainer in communication with the receiver and the detector to obtain a plurality of media objects having placement attributes corresponding to the placeholder in the main content, wherein a placement attribute is indicative of characteristics of a media object to fit in the placeholder.
  • the system further comprises a selector in communication with the receiver, the detector, and the obtainer.
  • the selector is to provide the plurality of media objects to a user.
  • the selector further is to receive an instruction from the user, the instruction being indicative of selection of a media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
  • the system comprises a generator in communication with the receiver, the detector, the obtainer, and the selector. The generator is to generate a final content indicative of the selected media object embedded in the main content.
  • a computer-implemented method executed by at least one processor comprises receiving a main content, where the main content includes at least one of a still image, an audio content or a video content.
  • the method further comprises detecting a placeholder in the main content for placement of a media object.
  • the media object includes at least one of an audio file, a video file, an image, or a text.
  • a placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the method comprises obtaining a plurality of media objects having placement attributes corresponding to the placeholder in the main content, where a placement attribute is indicative of characteristics of a media object to fit in the placeholder.
  • the method comprises selecting one of the plurality of media objects for being placed in the placeholder of the main content, based on a user profile.
  • the method further comprises generating a final content indicative of the selected media object embedded in the main content.
  • the present disclosure is described by referring mainly to examples thereof.
  • the examples of the present disclosure described herein may be used together in different combinations.
  • details are set forth in order to provide an understanding of the present disclosure. It will be readily apparent however, that the present disclosure may be practiced without limitation to all these details.
  • the terms “a” and “an” are intended to denote at least one of a particular element.
  • the term “includes” means includes but is not limited to; the term “including” means including but not limited to.
  • the term “based on” means based at least in part on.
  • advertisements for products and/or services may be rendered to users when they are streaming multimedia content such as, for example, a video file, an audio file, a still image, or any combination thereof, over the Internet.
  • Different advertisements for different products and/or services may be rendered to different users based on a variety of factors, at various times.
  • media objects associated with products and/or services may be added and/or replaced within the multimedia content before/during presentation of the multimedia content to the user, in order to render advertisements to a user.
  • the multimedia content may be rendered to users with different media objects added and/or replaced therein in different geographical areas.
  • Advertisements using targeted advertising techniques are generally rendered to users when they are streaming content such as, for example, movies and images, over the Internet.
  • Targeted advertising involves identifying potential customers based on user data associated with them.
  • the user data associated with a user is indicative of preferences of the user.
  • the preferences of the user may be determined and, accordingly, selected advertisements can be rendered to the user.
  • the goal is to increase the probability of the customer buying the advertised product or service because the product or service is related to the customer's preference.
  • the disclosed techniques may be used in other domains as well.
  • the disclosed techniques may be used to distribute multimedia content that raises social awareness about one or more issues, where the content is modified based on factors such as geographical region, cultural norms, regulatory guidelines, and the like.
  • multiple versions of a single content may be generated and stored.
  • Each of the multiple versions includes one or more media objects related to a specific user preference.
  • a version of the content most specific to the preference of the user can be delivered to that user.
  • a first version of a video A may include media objects related to a user preference, say cars.
  • the first version may include a media object related to a car and another media object related to a car cleaning service.
  • a second version of the video A may include media objects related to a user preference, say, apparel shopping.
  • the second version of the video A may include a media object related to an online shopping portal and another advertisement to a clothing brand.
  • different versions of the same video A include media objects related to different user preferences. Now, when a user who has an interest in cars seeks to watch the video A, the version of the video A that includes the media objects related to cars is rendered to the user.
  • generation and storage of multiple versions of the same content is a resource-intensive task. For instance, repeated processing operations must be performed to generate the multiple versions. Furthermore, given the considerable extent and variety of possible user preferences, a substantial number of versions may need to be created for the content, thereby consuming significant amounts of storage space.
  • the system receives a main content such as, for example, a still image, an audio content, and a video content.
  • the system is to detect a placeholder in the main content for placement of a media object.
  • the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the media object may include an audio file, a video file, an image, and/or text. Further, the media object may be rendered as an advertisement to a user.
  • the system further obtains a plurality of media objects having placement attributes corresponding to the placeholder in the main content.
  • a placement attribute is indicative of characteristics of a media object compatible with the placeholder such as, for example, to fit in the placeholder.
  • upon obtaining the plurality of media objects, the system is to select one of the plurality of media objects for being placed in the placeholder of the main content.
  • the media object to be placed in the placeholder is selected based on a user profile.
  • the system is to provide the plurality of media objects to the user.
  • the system is to receive an instruction indicative of selection of a media object for being placed in the placeholder of the main content from the user. Based on the selected media object, the system generates a final content indicative of the selected media object embedded in the main content.
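  The receive-detect-obtain-select-generate flow described above can be sketched as follows; all class, field, and function names here are illustrative assumptions, not taken from the disclosure.

```python
from dataclasses import dataclass

# Hypothetical, minimal sketch of the described pipeline; every name below
# is an illustrative assumption, not the patent's own code.

@dataclass
class MediaObject:
    name: str
    duration_s: float          # play duration (a placement attribute)

@dataclass
class Placeholder:
    start_s: float
    end_s: float

    @property
    def duration_s(self) -> float:
        return self.end_s - self.start_s

def detect_placeholders(main_content):
    # Detector: in practice this would parse the composition description
    # supplied with the content.
    return main_content.get("placeholders", [])

def obtain_candidates(catalog, placeholder):
    # Obtainer: keep only media objects whose placement attributes are
    # compatible with the placeholder (here: fit its duration).
    return [m for m in catalog if m.duration_s <= placeholder.duration_s]

def select_by_profile(candidates, profile):
    # Selector: pick the first candidate matching a user preference.
    for m in candidates:
        if m.name in profile["preferences"]:
            return m
    return candidates[0] if candidates else None

def generate_final(main_content, placeholder, media_object):
    # Generator: record the embedding; a real system would composite media.
    return {**main_content, "embedded": (placeholder.start_s, media_object.name)}

main = {"placeholders": [Placeholder(270.0, 278.0)]}   # an 8-second spot
catalog = [MediaObject("car_ad", 6.0), MediaObject("apparel_ad", 12.0)]
ph = detect_placeholders(main)[0]
chosen = select_by_profile(obtain_candidates(catalog, ph), {"preferences": {"car_ad"}})
final = generate_final(main, ph, chosen)
```

  In the user-driven variant, `select_by_profile` would simply be replaced by returning the candidate list to the user and waiting for a selection instruction.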
  • the system of the present disclosure offers a comprehensive and time-effective approach for dynamic generation of content with media objects.
  • the proposed approach averts the need to generate and store multiple versions of the content. As a result, processing load and storage usage are reduced. Furthermore, placement of suitable media objects in the placeholder produces more effective advertisements, and the system offers multiple techniques for selecting the media object to be embedded in the main content. Therefore, the system and the method of the present disclosure offer comprehensive, efficient, and time-effective dynamic generation of content with media objects.
  • FIG. 1 illustrates a schematic view of a system 100 for dynamic generation of content with media objects, according to an example of the present disclosure.
  • the content may include at least one of a still image, an audio content or a video content.
  • the system 100 may include a receiver 102 , a detector 104 , an obtainer 106 , a selector 108 , a generator 110 , a renderer 112 , and a converter 114 .
  • the detector 104 may be in communication with the receiver 102 .
  • the obtainer 106 may be in communication with the receiver 102 and the detector 104 .
  • the selector 108 may be in communication with the receiver 102 , the detector 104 , and the obtainer 106 .
  • the generator 110 may be in communication with the receiver 102 , the detector 104 , the obtainer 106 , and the selector 108 .
  • the renderer 112 and the converter 114 may be in communication with generator 110 .
  • the receiver 102 may receive the content also referred to as main content.
  • the main content may include, but is not limited to, the still image, the audio content, and the video content.
  • the audio content may be encoded in an Advanced Audio Coding (AAC) format, an MP3 format, or an OGG format.
  • the video content may be encoded in an MPEG2 format or an X264 format.
  • the still image may be encoded in a PNG format or a JPEG format.
  • the main content may be encoded in other formats not disclosed above without departing from the scope of the disclosure.
  • the receiver 102 may receive metadata associated with the main content as well.
  • metadata may include, but is not limited to, menus, chapters, and subtitles of the main content.
  • the content may be received from a data repository (not shown), which may be an internal data repository or an external data repository.
  • the detector 104 may detect one or more placeholders in the main content for placement of media objects.
  • a placeholder is indicative of a position in the main content for placing a media object.
  • the placeholder in the main content may be defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the media object may include, but is not limited to, an audio file, a video file, an image, or a text file.
  • the placeholder for the placement of the media object may be defined based on the timestamp.
  • the placeholder may be defined as a spot in the audio between timestamp 4:30:00 and timestamp 4:30:08. Therefore, the main content, i.e., the audio content, has an 8-second placeholder within that interval for placement of a media object.
  • when the main content is the video content, the placeholder may be defined based on the frame range. For example, the placeholder may be defined as a spot in the video between the 50th frame and the 53rd frame. Therefore, the main content, i.e., the video content, has a placeholder for placing the media object for a duration of 4 frames.
  • the placeholder may be a reference area existing through one or more frames of the video content. For example, the placeholder may be a surface of a drink can held by an actor in the video content.
  • when the main content is the still image, the placeholder may be defined based on the reference area. For example, if the still image depicts a bus, the placeholder may be defined as a side of the bus in the still image.
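  The placeholder definitions above (time range, frame range, reference area) could be modeled roughly as follows; the field names and the fixed frame rate are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative sketch (names assumed): a placeholder may be anchored by a
# time range (audio), a frame range (video), and/or a reference area
# (image or video), matching the definitions above.

@dataclass
class Placeholder:
    time_range: Optional[Tuple[float, float]] = None    # seconds, e.g. an audio spot
    frame_range: Optional[Tuple[int, int]] = None       # inclusive frame indices
    reference_area: Optional[Tuple[int, int, int, int]] = None  # x, y, w, h

    def duration_s(self, fps: float = 25.0) -> float:
        """Play duration implied by the anchor, if any."""
        if self.time_range:
            return self.time_range[1] - self.time_range[0]
        if self.frame_range:
            first, last = self.frame_range
            # Inclusive range, as in the 50th-53rd frame example (4 frames).
            return (last - first + 1) / fps
        return 0.0

audio_spot = Placeholder(time_range=(270.0, 278.0))   # the 8-second audio example
video_spot = Placeholder(frame_range=(50, 53))        # the 4-frame video example
```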
  • the obtainer 106 may obtain a plurality of media objects having placement attributes corresponding to a placeholder in the main content.
  • a placement attribute is indicative of characteristics of a media object to fit in the placeholder.
  • the placement attributes may include, but are not limited to, dimensions of the media object and a play duration of the media object.
  • for example, for a placeholder with a play duration of 6 seconds, the obtainer 106 may obtain media objects that can fit within the duration of 6 seconds.
  • the obtainer 106 may obtain one media object with the play duration of 6 seconds.
  • the obtainer 106 may obtain two media objects with collective play durations of 6 seconds.
  • the obtainer 106 may obtain the media objects from the data repository as explained earlier.
  • the obtainer 106 may obtain the media objects from an object data repository (not shown) that is independent of the data repository of the main content.
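  The obtainer's fitting logic described above, placing one object of the full play duration or two objects whose durations collectively fill it, can be sketched as below; `fit_placeholder` is a hypothetical helper, not part of the disclosure.

```python
from itertools import combinations

def fit_placeholder(durations, slot_s):
    """Return indices of media objects that exactly fill slot_s seconds.

    Prefers a single object of the full duration; otherwise tries pairs
    whose play durations sum to the slot, as in the 6-second example.
    """
    for i, d in enumerate(durations):
        if d == slot_s:
            return [i]                      # one object of the full duration
    for (i, a), (j, b) in combinations(enumerate(durations), 2):
        if a + b == slot_s:
            return [i, j]                   # two objects, collective duration
    return []                               # nothing fits exactly

# Usage: a 6-second placeholder filled from small candidate catalogs.
single = fit_placeholder([3.0, 6.0, 4.0], 6.0)   # -> [1]
pair = fit_placeholder([2.0, 3.0, 4.0], 6.0)     # -> [0, 2]
```

  A production system would presumably also allow near-fits and trimming rather than exact sums; exact matching keeps the sketch simple.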
  • the selector 108 may select one of the plurality of media objects for being placed in the placeholder of the main content.
  • the selector 108 may select a media object to be placed in the placeholder, based on a user profile.
  • the user profile is indicative of preferences of a user with respect to viewing of content.
  • a profile of the user may be maintained based on historical usage data of the user.
  • the system 100 may keep a record of activities of the user and predict preferences of the user accordingly.
  • the selector 108 may provide the plurality of media objects obtained by the obtainer 106 to the user.
  • the selector 108 may provide the plurality of media objects to the user in form of a list through a Graphical User Interface (GUI) (not shown).
  • the list may be shown on a multimedia device (not shown) used by the user.
  • the multimedia device may include, but is not limited to, a personal computing device, a smart phone, a laptop, an infotainment system installed in an automobile, an in-flight infotainment system, and a smart television.
  • the selector 108 may receive an instruction from the user.
  • the instruction is indicative of selection of a media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
  • the generator 110 may generate a final content.
  • the final content is indicative of the selected media object embedded in the main content.
  • the generator 110 may integrate at least one content enhancement effect to the selected media object embedded in the main content.
  • the content enhancement effect may include, but is not limited to, a blur effect, a sharpness effect, a saturation effect, a brightness effect, a hue effect, and a contrast effect.
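  As a rough, pure-Python illustration of two of the enhancement effects named above (brightness and contrast) applied to a region's pixels; the RGB-tuple pixel representation and function names are assumptions.

```python
def clamp(v):
    """Clamp a channel value to the valid 8-bit range."""
    return max(0, min(255, int(round(v))))

def adjust_brightness(pixels, offset):
    """Add a brightness offset to each RGB channel of every pixel."""
    return [tuple(clamp(c + offset) for c in px) for px in pixels]

def adjust_contrast(pixels, factor):
    """Scale each channel around the mid-grey value 128."""
    return [tuple(clamp(128 + (c - 128) * factor) for c in px) for px in pixels]

# Usage: a two-pixel region of the embedded media object.
region = [(100, 150, 200), (250, 10, 128)]
brighter = adjust_brightness(region, 20)   # -> [(120, 170, 220), (255, 30, 148)]
punchier = adjust_contrast(region, 2.0)    # -> [(72, 172, 255), (255, 0, 128)]
```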
  • the renderer 112 may render the final content to the user.
  • the final content may not be in a format that can be rendered by the renderer 112 to the user.
  • the converter 114 may convert the final content into a new format.
  • the converter 114 may detect whether the final content has to be converted, based on the specifications of the software or hardware available for rendering the final content.
  • the software may invoke a request for conversion into the new format.
  • the user may invoke conversion of the final content into the new format. Subsequently, the converter 114 may render the final content to the user in the new format.
  • the renderer 112 may render the final content to the user through a hardware device, such as a monitor and a speaker.
  • the final content may be rendered in the form of a data stream, a radio broadcast, or a file, without departing from the scope of the present disclosure.
  • FIG. 2 illustrates another block diagram depicting functionalities of the system 100 , according to another example embodiment of the present disclosure.
  • the components of the system 100 are already explained in detail in the description of FIG. 1 .
  • FIG. 2 is provided to offer a more detailed understanding of the present disclosure and, therefore, should not be construed as limiting. For the sake of brevity, features of the present disclosure that are already explained in the description of FIG. 1 are not explained in detail in the description of FIG. 2.
  • the system 100 may process the main content 202 based on dynamic composition description details 204 and a list of media objects 206 , e.g., the media objects, to output dynamically generated content, i.e., the final content 208 .
  • the dynamic composition description details 204 may include compositional details of the main content 202 .
  • the compositional details may be described in a flat file.
  • the format of the flat file may include, but is not limited to, a JSON format, an XML format, and a TXT format.
  • the compositional details may include operational details of the main content 202 .
  • the operational details may include customization of sentences, music, and sounds in an audio track.
  • the operational details may include insertion of images, text, and 3D objects in the still image.
  • the operational details may include insertion of images, text, 3D objects, and other videos in the video content.
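  A flat-file composition description in JSON might look like the following; the schema and every field name are purely illustrative assumptions, since the disclosure does not publish a schema.

```python
import json

# Hypothetical dynamic composition description: placeholders anchored by
# frame range, reference area, or time range, plus the operations to apply.
description = json.loads("""
{
  "main_content": "video_a.mp4",
  "placeholders": [
    {"id": "ph1", "frame_range": [50, 53], "reference_area": [120, 80, 200, 150]},
    {"id": "ph2", "time_range": [270.0, 278.0]}
  ],
  "operations": [
    {"type": "insert_image", "placeholder": "ph1"},
    {"type": "customize_audio", "placeholder": "ph2"}
  ]
}
""")

placeholder_ids = [p["id"] for p in description["placeholders"]]
```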
  • the dynamic composition description details 204 may include details pertaining to the detected placeholder in the main content 202 .
  • the placeholder is defined based on at least one of the timestamp, the time range, the frame range, and the reference area in the main content 202 .
  • the system 100 may detect a location of the placeholder based on different coordinates over the time dimension. Such coordinates may be defined on key frames.
  • a key frame is indicative of a frame of the animation that is being used as reference for locating the placeholder.
  • the system 100 may define two key frames, a first key frame and a second key frame. Specifically, a first frame and a twelfth frame of the animation may be defined as the first key frame and the second key frame, respectively.
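  Locating a moving placeholder at frames between two key frames can be sketched as linear interpolation of its corner coordinates; this interpolation approach and the helper names are assumptions consistent with the key-frame description above.

```python
def lerp(a, b, t):
    """Linear interpolation between scalars a and b at parameter t in [0, 1]."""
    return a + (b - a) * t

def placeholder_at(frame, kf_a, kf_b):
    """Interpolate placeholder corner coordinates between two key frames.

    kf_a, kf_b: (frame_index, [(x, y), ...]) with matching corner counts.
    """
    fa, corners_a = kf_a
    fb, corners_b = kf_b
    t = (frame - fa) / (fb - fa)
    return [(lerp(xa, xb, t), lerp(ya, yb, t))
            for (xa, ya), (xb, yb) in zip(corners_a, corners_b)]

# Usage: the first and twelfth frames as the two key frames, as in the
# example above; corner values are made up for illustration.
kf1 = (1, [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)])
kf2 = (12, [(22.0, 11.0), (32.0, 11.0), (32.0, 16.0), (22.0, 16.0)])
mid = placeholder_at(6.5, kf1, kf2)   # halfway between the key frames
```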
  • the list of media objects 206 may be an organized data structure containing descriptions of media objects.
  • the list of media objects 206 may alternatively be an organized data structure containing descriptions of media objects and copies of the media objects.
  • the list of media objects 206 may also or alternatively include corresponding placement attributes.
  • the placement attributes may include a retrieval path of a corresponding media object.
  • the retrieval path may be located either locally on an internal or an external drive, for example, a hard drive, or in one of a memory or a cache.
  • the retrieval path may be located online where it may be accessible by a Uniform Resource Locator (URL).
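  Distinguishing the two kinds of retrieval path described above (a local drive, memory, or cache versus an online URL) might be done as follows; `classify_retrieval_path` is an assumed helper.

```python
from urllib.parse import urlparse

def classify_retrieval_path(path):
    """Return 'online' for http(s) URLs and 'local' for filesystem paths."""
    scheme = urlparse(path).scheme
    return "online" if scheme in ("http", "https") else "local"

# Usage: deciding how to fetch a media object before embedding it.
kind_a = classify_retrieval_path("https://example.com/ads/car_ad.mp4")  # 'online'
kind_b = classify_retrieval_path("/media/cache/car_ad.mp4")             # 'local'
```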
  • system 100 may be embodied in the form of a user application for content playback (e.g., a standalone mobile application, or alternatively, a plug-in or extension for an Internet browser).
  • Receiver 102 may receive a multimedia container 201 containing main content 202 , dynamic composition description details 204 , and the list of media objects 206 .
  • renderer 112 may render the final content at a back end server separate from a system displaying the final content to a user.
  • renderer 112 may render the final content locally on the system displaying the final content. Rendering the final content locally may be beneficial in cases where no Internet connection is available and also in cases where the system displaying the final content has enough CPU power and memory to render the final content itself.
  • the rendering may be done dynamically based on playback of the content.
  • the multimedia container 201 containing main content 202 , dynamic composition description details 204 , and the list of media objects 206 may be downloaded on a local machine such as a mobile device, smart TV or personal computer.
  • detector 104 may detect one or more placeholders in the main content 202 based on the dynamic composition description details 204 .
  • Obtainer 106 may obtain the list of media objects 206 and selector 108 may select the appropriate media object to add or replace into the main content 202 either by itself or based on information received from an external system or a user. All of the actions described above may occur in real time.
  • renderer 112 may render the replacement/additional media objects at a predetermined time, for example, 1 minute before playback, rendering them in the background.
  • a targeted version of the content may be rendered upon receipt of the multimedia container 201 and then included in the playback of the final content.
  • FIG. 3 illustrates a moving location of a placeholder 302 in an animation, according to one or more example embodiments of the present disclosure.
  • features of the present disclosure that are already explained in the description of FIG. 1 and FIG. 2 are not explained in the description of FIG. 3 .
  • Block A in FIG. 3 illustrates the placeholder 302 being clearly visible in a key frame kf of the animation.
  • the location of the placeholder 302 is defined by a zone (d, e, f, g) in the key frame kf.
  • a vehicle 304 is also shown to be approaching the placeholder 302 .
  • Block B in FIG. 3 illustrates a key frame kf+1 of the animation.
  • the vehicle 304 is driving past the placeholder 302 .
  • a portion of the vehicle 304 acting as a mask to the placeholder 302 is depicted by coordinates (h, i, j, k, l, m).
  • the mask is defined by the coordinates.
  • Block C in FIG. 3 illustrates a key frame kf+2 of the animation
  • the portion of the vehicle 304 is acting as the mask to the placeholder 302 in coordinates (h′, i′, j′, k′, l′, m′).
  • Block D in FIG. 3 illustrates a key frame kf+3 of the animation.
  • the location of the placeholder 302 is now defined by a zone (d′, e′, f′, g′).
  • the portion of the vehicle 304 is acting as the mask to the placeholder 302 in the coordinates (h′, i′, j′, k′, l′, m′).
  • the list of media objects 206 may also include the corresponding placement attributes.
  • the placement attributes include a retrieval path of a corresponding media object.
  • the retrieval path may point to a location stored locally, for example, on an internal or external drive such as a hard drive, or in a memory or a cache.
  • the retrieval path may be located online, where it may be accessible by a Uniform Resource Locator (URL).
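The distinction between local and online retrieval paths can be sketched with a small helper. This is illustrative only; the scheme list and helper name are assumptions, since the disclosure only states that a path may point to a local drive, memory, or cache, or to an online resource reachable by URL.

```python
from urllib.parse import urlparse

def classify_retrieval_path(path: str) -> str:
    """Classify a retrieval path from the placement attributes as 'online'
    (reachable by URL) or 'local' (drive, memory, or cache).

    Hypothetical helper; the recognized schemes are an assumption.
    """
    scheme = urlparse(path).scheme
    return "online" if scheme in ("http", "https", "ftp") else "local"

print(classify_retrieval_path("https://cdn.example.com/objects/ad42.mp4"))  # online
print(classify_retrieval_path("/mnt/cache/objects/ad42.mp4"))               # local
```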
  • the system 100 may determine that multiple media objects 206 may be positioned in a placeholder. After the processing, the system 100 may receive a selection signal 210 indicative of the selection of one of the media objects 206 for being shown to the user along with the main content 202 as the final content 208 .
  • the media object 206 may be selected based on the user profile or the user instruction.
  • an external selector may determine the media object 206 to be embedded in the main content 202 for rendering or conversion.
  • the external selector may share details pertaining to the user profile with the system 100 .
  • the system 100 may then determine the media object 206 to be embedded in the main content 202 .
  • the system 100 may retrieve the selected media object 206 based on the retrieval path disclosed in the corresponding placement attributes. Subsequently, the final content 208 may be rendered to the user.
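The handling of a selection signal and the lookup of the corresponding retrieval path might be sketched as follows. The `id` and `retrieval_path` keys are assumed field names for illustration, not part of the disclosure.

```python
def handle_selection(media_objects, selection_signal):
    """Return the retrieval path of the media object named by the selection
    signal, so it can be fetched and embedded in the main content.

    Hypothetical sketch: the selection signal is modeled as an object id.
    """
    by_id = {obj["id"]: obj for obj in media_objects}
    return by_id[selection_signal]["retrieval_path"]

objects = [
    {"id": "obj-1", "retrieval_path": "/cache/objects/obj-1.png"},
    {"id": "obj-2", "retrieval_path": "https://cdn.example.com/obj-2.png"},
]
print(handle_selection(objects, "obj-2"))  # https://cdn.example.com/obj-2.png
```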
  • the system 100 may generate multiple copies of the final content 208. In another example embodiment, the system 100 may share the final content 208 with the user through any network or communication protocol. In one example embodiment, the system 100 may play the final content 208 for the user through a local multimedia application. In another example embodiment, the system 100 may stream the final content 208 for the user through a browser player.
  • the system 100 may render the final content 208 through an ad hoc player that dynamically generates the final content 208 based on the selection signal 210 .
  • the system 100 may generate the final content 208 in a common multimedia file format.
  • FIG. 4 illustrates a hardware platform 400 for implementation of the system 100 , according to an example of the present disclosure.
  • the hardware platform 400 may be a computer system 400 that may be used with the examples described herein.
  • the computer system 400 may represent a computational platform that includes components that may be in a server or another computer system.
  • the computer system 400 may execute, by a processor (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions and other processes described herein.
  • the methods, functions, and other processes described herein may be embodied as machine readable instructions stored on a computer readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory).
  • the computer system 400 may include a processor 402 that may implement or execute machine readable instructions performing some or all of the methods, functions, techniques and/or other processes described herein. Commands and data from the processor 402 may be communicated over a communication bus 404 .
  • the computer system 400 may also include a main memory 406 , such as a random access memory (RAM), where the machine readable instructions and data for the processor 402 may reside during runtime, and a secondary data storage 408 , which may be non-volatile and stores machine readable instructions and data.
  • the memory 406 and data storage 408 are examples of non-transitory computer readable mediums.
  • the memory 406 and/or the secondary data storage may store data used by the system 100 , such as an object repository including web objects, configuration data, test data, etc.
  • the computer system 400 may include an Input/Output (I/O) device 410 , such as a keyboard, a mouse, a display, etc.
  • a user interface (UI) 412 can be a communication device that provides textual and graphical user interfaces to a user of the system 100 .
  • the UI 412 may operate with I/O device 410 to accept from and provide data to a user.
  • the computer system 400 may include a network interface 414 for connecting to a network. Other known electronic components may be added or substituted in the computer system.
  • the processor 402 may be designated as a hardware processor.
  • the processor 402 may execute various components of the system 100 described above and perform the methods described below.
  • FIG. 5 illustrates a computer-implemented method 500 depicting functionality of the system 100 , according to an example of the present disclosure.
  • the method 500 includes receiving a main content.
  • the main content includes at least one of a still image, an audio content, or a video content.
  • the receiver 102 of the system 100 may receive the main content.
  • a placeholder in the main content is detected for placement of a media object.
  • the media object includes at least one of an audio file, a video file, an image, or a text file.
  • the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content.
  • the detector 104 of the system 100 may detect the placeholder.
  • the placeholder is defined based on the timestamp when the main content is the audio content. In another example embodiment, the placeholder is defined based on the frame range when the main content is the video content. In yet another example embodiment, the placeholder is defined based on the reference area when the main content is the still image.
  • a plurality of media objects having placement attributes corresponding to the placeholder in the main content is obtained.
  • the placement attribute is indicative of characteristics of a media object to fit in the placeholder.
  • the obtainer 106 of the system 100 may obtain the plurality of media objects.
  • one of the plurality of media objects is selected for being placed in the placeholder of the main content, based on the user profile.
  • the user profile is indicative of preferences of the user with respect to viewing of the content, based on the historical usage data of the user.
  • the media object to be placed in the placeholder is selected based on a user selection.
  • a user may provide preference by way of a user instruction.
  • the plurality of media objects is first provided to the user. Subsequently, an instruction from the user is received. The instruction is indicative of the selection of the media objects.
  • the selector 108 of the system 100 may select one of the plurality of media objects.
  • the final content indicative of the selected media object embedded in the main content is generated.
  • the generator 110 of the system 100 may generate the final content.
  • at least one content enhancement effect is integrated to the selected media object embedded in the main content.
  • the at least one content enhancement effect includes the blur effect, the sharpness effect, the saturation effect, the brightness effect, the hue effect, and the contrast effect.
  • the final content is then rendered to a user.
  • the method 500 includes converting the final content into a new format before being rendered to the user.
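The steps of method 500 described above can be sketched end to end. This is a minimal, assumption-laden Python sketch: the helper names (`detect_placeholder`, `fits`, `embed`), dictionary schemas, and profile scoring are invented for illustration and do not reflect the claimed implementation.

```python
def detect_placeholder(main_content):
    # In the disclosure the detector finds a timestamp/frame-range/reference
    # area; here the placeholder is simply carried inside the content dict.
    return main_content["placeholder"]

def fits(attrs, placeholder):
    # A placement attribute indicates whether the object fits the placeholder;
    # duration matching stands in for the general compatibility check.
    return attrs["duration"] <= placeholder["duration"]

def embed(main_content, placeholder, media_object):
    final = dict(main_content)
    final["embedded"] = media_object["id"]
    return final

def generate_final_content(main_content, media_repo, user_profile):
    """Sketch of method 500: detect a placeholder, obtain fitting media
    objects, select one based on the user profile, and embed it."""
    placeholder = detect_placeholder(main_content)
    candidates = [m for m in media_repo
                  if fits(m["placement_attributes"], placeholder)]
    selected = max(candidates,
                   key=lambda m: user_profile.get(m["category"], 0))
    return embed(main_content, placeholder, selected)

video = {"type": "video", "placeholder": {"duration": 6}}
repo = [
    {"id": "car-ad", "category": "cars",
     "placement_attributes": {"duration": 5}},
    {"id": "clothes-ad", "category": "apparel",
     "placement_attributes": {"duration": 6}},
]
profile = {"cars": 0.9, "apparel": 0.2}
print(generate_final_content(video, repo, profile)["embedded"])  # car-ad
```

In the alternate embodiment described above, the `max` over profile scores would be replaced by a selection instruction received from the user.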

Abstract

A system comprises a receiver to receive a main content. The system further comprises a detector to detect a placeholder in the main content for placement of a media object. Further, the system comprises an obtainer to obtain a plurality of media objects having placement attributes corresponding to the placeholder in the main content, where a placement attribute is indicative of characteristics of a media object compatible with the placeholder. The system further comprises a selector to select one of the plurality of media objects for being placed in the placeholder of the main content, based on a user profile. Further, the system comprises a generator to generate a final content indicative of the selected media object embedded in the main content.

Description

    PRIORITY CLAIM
  • This application claims priority to European patent application number EP18157254.6 filed on Feb. 16, 2018, the disclosure of which is incorporated by reference in its entirety herein.
  • BACKGROUND
  • With the growing competition in the market place, it is getting difficult for organizations to establish and maintain their position in their sector. In order to stay ahead in the race, organizations are using various marketing strategies to reach out to customers. One such technique is depicting products and services by inserting media objects while streaming online content to viewers. Such content may include, but is not limited to, audio files, videos and images.
  • Such media objects to be inserted in a content may have to be selected based on various criteria, such as regulatory guidelines and historical data of a user. For example, in a video content showing an alcoholic beverage, the alcoholic beverage may be replaced with a non-alcoholic beverage in countries where displaying alcohol-based beverage is not allowed.
  • In recent times, targeted product display has gained widespread recognition. As is generally known, targeted product display refers to an approach where products and services are offered to an individual as the media objects in a content, based on preferences that can be determined from historical records of the user. Such a focused approach assists organizations to cater to consumers based on their specific preferences.
  • However, such an approach may require the generation of a number of versions of the same content with different media objects. For example, for each customer-specific variation, a new version of multimedia content having a user-specific display as a media object has to be generated. Owing to a large consumer base, numerous versions of the same content with different media objects are generated and stored, which requires a large amount of storage space. Resource consumption may also be excessive owing to the continuous, repetitive effort required for generation of the versions. This may also lead to high costs and maintenance concerns, considering the handling of such a large amount of data. Moreover, handling such a large amount of data may lead to slow content processing and inconvenience in viewing the content as well. Thus, modification of media objects in a product display may be expensive to implement and may require intensive CPU (Central Processing Unit) processing for implementation.
  • BRIEF DESCRIPTION OF DRAWINGS
  • Features of the present disclosure are illustrated by way of examples shown in the following figures. In the following figures, like numerals indicate like elements, in which:
  • FIG. 1 illustrates a block diagram of a system, according to an example embodiment of the present disclosure;
  • FIG. 2 illustrates another block diagram depicting functionalities of the system, according to another example embodiment of the present disclosure;
  • FIG. 3 illustrates moving location of a placeholder in an animation, according to an example embodiment of the present disclosure;
  • FIG. 4 illustrates a hardware platform for implementation of the system, according to an example embodiment of the present disclosure; and
  • FIG. 5 illustrates a computer-implemented method depicting functionality of the system, according to an example embodiment of the present disclosure.
  • SUMMARY
  • This summary is provided to introduce concepts related to dynamic generation of content with media objects. These concepts are further described below in the detailed description. This summary is not intended to identify essential features of the claimed subject matter nor is it intended for use in determining or limiting the scope of the claimed subject matter.
  • According to an embodiment of the present disclosure, a system is disclosed. The system comprises a receiver to receive a main content. The main content includes at least one of a still image, an audio content or a video content. The system further comprises a detector in communication with the receiver to detect at least one potential placeholder, hereinafter placeholder, in the main content for placement of a media object. The media object includes at least one of an audio file, a video file, an image, or a text. The placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. Further, the system comprises an obtainer in communication with the receiver and the detector. The obtainer is to obtain a plurality of media objects having placement attributes corresponding to the placeholder in the main content, where a placement attribute is indicative of characteristic(s) of a media object compatible with the placeholder, for instance to appropriately fit in the placeholder. The system further comprises a selector in communication with the receiver, the detector, and the obtainer. The selector is to select a media object from among the plurality of media objects for being placed in the placeholder of the main content, based on a user profile. Further, the system comprises a generator in communication with the receiver, the detector, the obtainer, and the selector. The generator is to generate a final content indicative of the selected media object embedded in the main content.
  • According to another embodiment of the present disclosure, a system is disclosed. The system comprises a receiver to receive a main content. The main content includes at least one of a still image, an audio content or a video content. The system further comprises a detector in communication with the receiver to detect a placeholder in the main content for placement of a media object. The media object includes at least one of an audio file, a video file, an image, or a text. A placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. Further, the system comprises an obtainer in communication with the receiver and the detector to obtain a plurality of media objects having placement attributes corresponding to the placeholder in the main content, wherein a placement attribute is indicative of characteristics of a media object to fit in the placeholder. The system further comprises a selector in communication with the receiver, the detector, and the obtainer. The selector is to provide the plurality of media objects to a user. The selector further is to receive an instruction from the user, the instruction being indicative of selection of a media object, from among the plurality of media objects, for being placed in the placeholder of the main content. Further, the system comprises a generator in communication with the receiver, the detector, the obtainer, and the selector. The generator is to generate a final content indicative of the selected media object embedded in the main content.
  • According to another embodiment of the present disclosure, a computer-implemented method executed by at least one processor is disclosed. The method comprises receiving a main content, where the main content includes at least one of a still image, an audio content or a video content. The method further comprises detecting a placeholder in the main content for placement of a media object. The media object includes at least one of an audio file, a video file, an image, or a text. A placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. Further, the method comprises obtaining a plurality of media objects having placement attributes corresponding to the placeholder in the main content, where a placement attribute is indicative of characteristics of a media object to fit in the placeholder. Further, the method comprises selecting one of the plurality of media objects for being placed in the placeholder of the main content, based on a user profile. The method further comprises generating a final content indicative of the selected media object embedded in the main content.
  • Other and further aspects and features of the disclosure will be evident from reading the following detailed description of the embodiments, which are intended to illustrate, not limit, the present disclosure.
  • DETAILED DESCRIPTION
  • For simplicity and illustrative purposes, the present disclosure is described by referring mainly to examples thereof. The examples of the present disclosure described herein may be used together in different combinations. In the following description, details are set forth in order to provide an understanding of the present disclosure. It will be readily apparent however, that the present disclosure may be practiced without limitation to all these details. Also, throughout the present disclosure, the terms “a” and “an” are intended to denote at least one of a particular element. As used herein, the term “includes” means includes but not limited to, the term “including” means including but not limited to. The term “based on” means based at least in part on.
  • In the realm of Internet marketing, advertisements for products and/or services may be rendered to users when they are streaming multimedia content such as, for example, a video file, an audio file, a still image, or any combination thereof, over the Internet. Different advertisements for different products and/or services may be rendered to different users based on a variety of factors, at various times. In one example technique, media objects associated with products and/or services may be added and/or replaced within the multimedia content before/during presentation of the multimedia content to the user, in order to render advertisements to a user.
  • When determining which advertisements to render, it may be appropriate to consider factors such as user preferences and/or government regulations, which may dictate what products and/or services may be advertised in multimedia content in a geographical area. Based on user preferences and/or government regulations, a system may determine to insert different media objects in the multimedia content. Therefore, the multimedia content may be rendered to users with different media objects added and/or replaced therein in different geographical areas.
  • The abovementioned technique of adding and/or replacing media objects in multimedia content based on different factors may be used in targeted advertisements. Advertisements using targeted advertising techniques are generally rendered to users when they are streaming content such as, for example, movies and images, over the Internet. Targeted advertising involves identifying potential customers based on user data associated therewith. The user data associated with a user is indicative of preferences of the user. Based on the user data, the preferences of the user may be determined and, accordingly, selected advertisements can be rendered to the user. The goal is to increase the probability of the customer buying the advertised product or service because the product or service is related to the customer's preference. One of ordinary skill in the art will appreciate that while the present disclosure discusses the features associated with adding and/or replacing media objects in multimedia content to create targeted advertisements, the disclosed techniques may be used in other domains as well. For example, the disclosed techniques may be used to distribute multimedia content that raises social awareness about one or more issues, where the content is modified based on factors such as geographical region, cultural norms, regulatory guidelines, and the like.
  • In example targeted multimedia techniques, multiple versions of a single content may be generated and stored. Each of the multiple versions includes one or more media objects related to a specific user preference. On learning about a preference of the user, a version of the content most specific to the preference of the user can be delivered to that user.
  • As an example, a first version of a video A may include media objects related to a user preference, say cars. For instance, the first version may include a media object related to a car and another media object related to a car cleaning service. In another example, a second version of the video A may include media objects related to a user preference, say, apparel shopping. In this example, the second version of the video A may include a media object related to an online shopping portal and another media object related to a clothing brand. As can be observed, different versions of the same video A include media objects related to different user preferences. Now, when a user who has an interest in cars seeks to watch the video A, the version of the video A that includes the media objects related to cars is rendered to the user.
  • As may be gathered, generation and storage of the multiple versions of the same content is a resource intensive task. For instance, repeated processing operations are to be performed for generating the multiple versions. Furthermore, given the considerable extent and variety of possible user preferences, a substantial number of versions may need to be created for the content, thereby consuming significant amounts of storage space.
  • According to aspects of the present disclosure, a system for dynamic generation of content with media objects is described. In an embodiment, the system receives a main content such as, for example, a still image, an audio content, and a video content. On receiving the main content, the system is to detect a placeholder in the main content for placement of a media object. In an example, the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. The media object may include an audio file, a video file, an image, and/or text. Further, the media object may be rendered as an advertisement to a user.
  • In an example, the system further obtains a plurality of media objects having placement attributes corresponding to the placeholder in the main content. As used herein, a placement attribute is indicative of characteristics of a media object compatible with the placeholder such as, for example, to fit in the placeholder.
  • Upon obtaining the plurality of media objects, the system is to select one of the plurality of media objects for being placed in the placeholder of the main content. In an example embodiment, the media object to be placed in the placeholder is selected based on a user profile. In an alternate example embodiment, the system is to provide the plurality of media objects to the user. Subsequently, the system is to receive an instruction indicative of selection of a media object for being placed in the placeholder of the main content from the user. Based on the selected media object, the system generates a final content indicative of the selected media object embedded in the main content.
  • The system of the present disclosure offers a comprehensive and time-effective approach for dynamic generation of content with media objects. The proposed approach averts the need to generate and store multiple versions of the content. As a result, processing load and usage of storage space are reduced. Furthermore, placement of suitable media objects in the placeholder produces more effective advertisements. Further, the system offers multiple techniques for selection of the media object to be embedded in the main content. Therefore, the system and the method of the present disclosure offer a comprehensive, efficient, and time-effective dynamic generation of the content with the media objects.
  • FIG. 1 illustrates a schematic view of a system 100 for dynamic generation of content with media objects, according to an example of the present disclosure. In one example embodiment, the content may include at least one of a still image, an audio content or a video content. The system 100 may include a receiver 102, a detector 104, an obtainer 106, a selector 108, a generator 110, a renderer 112, and a converter 114.
  • In an example embodiment, the detector 104 may be in communication with the receiver 102. The obtainer 106 may be in communication with the receiver 102 and the detector 104. The selector 108 may be in communication with the receiver 102, the detector 104, and the obtainer 106. The generator 110 may be in communication with the receiver 102, the detector 104, the obtainer 106, and the selector 108. The renderer 112 and the converter 114 may be in communication with generator 110.
  • In an example embodiment, the receiver 102 may receive the content also referred to as main content. The main content may include, but is not limited to, the still image, the audio content, and the video content. In an example embodiment, the audio content may be encoded in an Advanced Audio Coding (AAC) format, an MP3 format, or an OGG format. Similarly, the video content may be encoded in an MPEG2 format or an X264 format. Furthermore, the still image may be encoded in a PNG format or a JPEG format. One of ordinary skill in the art will appreciate that the main content may be encoded in other formats not disclosed above without departing from the scope of the disclosure.
  • In an example embodiment, the receiver 102 may receive metadata associated with the main content as well. Such metadata may include, but is not limited to, menus, chapter, and subtitles of the main content. The content may be received from a data repository (not shown), which may be an internal data repository or an external data repository.
  • Once the main content is received, the detector 104 may detect one or more placeholders in the main content for placement of media objects. A placeholder is indicative of a position in the main content for placing a media object. The placeholder in the main content may be defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. Furthermore, the media object may include, but is not limited to, an audio file, a video file, an image, or a text file.
  • In an example embodiment, when the main content is the audio content, the placeholder for the placement of the media object is defined based on the timestamp. For example, the placeholder may be defined as a spot in the audio between a timestamp of 4:30:00 and a timestamp of 4:30:08. Therefore, the main content, i.e., the audio content, has a placeholder of 8 seconds within the mentioned time range for placement of a media object.
  • In an example embodiment, when the main content is the video content, the placeholder may be defined based on the frame range. For example, the placeholder may be defined as a spot in the video between the 50th frame and 53rd frame. Therefore, the main content, i.e., the video content, has a placeholder for placing the media object for a duration of 4 frames. In another example embodiment with the main content being the video content, the placeholder may be a reference area existing through one or more frames of the video content. For example, the placeholder may be a surface of a drink can held by an actor in the video content.
  • In an example embodiment, when the main content is the still image, the placeholder may be defined based on the reference area. For example, if the still image depicts a bus, the placeholder may be defined as a side of a bus in the still image.
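The three kinds of placeholder definitions described above (timestamp/time range for audio, frame range for video, reference area for a still image) could be represented, for illustration, as simple descriptors. The dictionary schemas below are assumptions, not part of the disclosure.

```python
# Hypothetical placeholder descriptors matching the three content types.
audio_placeholder = {"type": "time_range",
                     "start": "4:30:00", "end": "4:30:08"}  # 8-second slot
video_placeholder = {"type": "frame_range",
                     "first_frame": 50, "last_frame": 53}   # 4 frames
image_placeholder = {"type": "reference_area",
                     "polygon": [(120, 80), (480, 80), (480, 260), (120, 260)]}

def placeholder_duration_frames(ph):
    """Number of frames covered by a frame-range placeholder (inclusive)."""
    return ph["last_frame"] - ph["first_frame"] + 1

print(placeholder_duration_frames(video_placeholder))  # 4
```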
  • Following the detection of the placeholders, the obtainer 106 may obtain a plurality of media objects having placement attributes corresponding to a placeholder in the main content. A placement attribute is indicative of characteristics of a media object to fit in the placeholder. The placement attributes may include, but are not limited to, dimensions of the media object and a play duration of the media object.
  • For example, in case of the main content being the audio content and the placeholder being of a duration of 6 seconds, the obtainer 106 may obtain media objects that can fit within the duration of 6 seconds. In an example embodiment, the obtainer 106 may obtain one media object with the play duration of 6 seconds. In another example embodiment, the obtainer 106 may obtain two media objects with collective play durations of 6 seconds. In an example embodiment, the obtainer 106 may obtain the media objects from the data repository as explained earlier. In another example embodiment, the obtainer 106 may obtain the media objects from an object data repository (not shown) that is independent of the data repository of the main content.
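The duration-matching example above, where either one 6-second object or two objects with a collective play duration of 6 seconds fill the slot, can be sketched as follows. The field names and the exact-fit criterion are assumptions for illustration.

```python
from itertools import combinations

def fitting_selections(media_objects, slot_seconds, max_objects=2):
    """Return singles or pairs of media objects whose play durations exactly
    fill the placeholder slot, mirroring the 6-second example above."""
    results = []
    for r in range(1, max_objects + 1):
        for combo in combinations(media_objects, r):
            if sum(m["duration"] for m in combo) == slot_seconds:
                results.append([m["id"] for m in combo])
    return results

objects = [{"id": "a", "duration": 6},
           {"id": "b", "duration": 4},
           {"id": "c", "duration": 2}]
print(fitting_selections(objects, 6))  # [['a'], ['b', 'c']]
```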
  • After obtaining the plurality of media objects by the obtainer 106, the selector 108 may select one of the plurality of media objects for being placed in the placeholder of the main content. In an example embodiment, the selector 108 may select a media object to be placed in the placeholder, based on a user profile. The user profile is indicative of preferences of a user with respect to viewing of content. A profile of the user may be maintained based on historical usage data of the user. Specifically, the system 100 may keep a record of activities of the user and predict preferences of the user accordingly.
  • In an alternate example embodiment, the selector 108 may provide the plurality of media objects obtained by the obtainer 106 to the user. The selector 108 may provide the plurality of media objects to the user in the form of a list through a Graphical User Interface (GUI) (not shown). The list may be shown on a multimedia device (not shown) used by the user. The multimedia device may include, but is not limited to, a personal computing device, a smart phone, a laptop, an infotainment system installed in an automobile, an in-flight infotainment system, and a smart television.
  • In response to providing of the plurality of media objects, the selector 108 may receive an instruction from the user. The instruction is indicative of selection of a media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
  • Further, in continuation with selection of the media object based on the user profile or the receipt of the instruction from the user, the generator 110 may generate a final content. The final content is indicative of the selected media object embedded in the main content.
  • In an example embodiment, the generator 110 may integrate at least one content enhancement effect to the selected media object embedded in the main content. The content enhancement effect may include, but is not limited to, a blur effect, a sharpness effect, a saturation effect, a brightness effect, a hue effect, and a contrast effect.
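Two of the enhancement effects named above, brightness and contrast, can be sketched as per-pixel operations; blur, hue, saturation, and sharpness follow similar per-pixel or neighborhood transforms. This pure-Python sketch on RGB tuples is illustrative only and does not reflect how the generator 110 is implemented.

```python
def clamp(v):
    """Clip a channel value to the valid 0-255 range."""
    return max(0, min(255, int(v)))

def brightness(pixels, factor):
    """Scale each RGB channel; factor > 1 brightens, factor < 1 darkens."""
    return [tuple(clamp(c * factor) for c in px) for px in pixels]

def contrast(pixels, factor):
    """Stretch channel values around the mid-point 128."""
    return [tuple(clamp(128 + (c - 128) * factor) for c in px) for px in pixels]

patch = [(100, 150, 200), (10, 20, 30)]
print(brightness(patch, 1.2))  # [(120, 180, 240), (12, 24, 36)]
print(contrast(patch, 2.0))    # [(72, 172, 255), (0, 0, 0)]
```

In practice such effects would typically be applied through an image-processing library rather than per-pixel Python loops.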
  • Further, the renderer 112 may render the final content to the user. In an alternate embodiment, the final content may not be in a format that can be rendered by the renderer 112 to the user. In such an example embodiment, the converter 114 may convert the final content into a new format. In an example embodiment, the converter 114 may detect whether the final content has to be converted or not, based on the specification of the software or hardware available for rendering the final content. In an example embodiment, the software may invoke a request for conversion into the new format. In another example embodiment, the user may invoke conversion of the final content into the new format. Subsequently, the converter 114 may render the final content to the user in the new format. In an example embodiment, the renderer 112 may render the final content to the user through a hardware device, such as a monitor or a speaker. In another example embodiment, the final content may be rendered in the form of a data stream, a radio broadcast, or a file, without departing from the scope of the present disclosure.
  • FIG. 2 illustrates another block diagram depicting functionalities of the system 100, according to another example embodiment of the present disclosure. The components of the system 100 are already explained in detail in the description of FIG. 1. FIG. 2 is provided to offer a more detailed understanding of the present disclosure and, therefore, should not be construed as limiting. For the sake of brevity, features of the present disclosure that are already explained in the description of FIG. 1 are not explained in detail in the description of FIG. 2.
  • As shown, the system 100 may process the main content 202 based on dynamic composition description details 204 and a list of media objects 206, e.g., the media objects, to output dynamically generated content, i.e., the final content 208. In one example embodiment, the dynamic composition description details 204 may include compositional details of the main content 202. The compositional details may be described in a flat file. The format of the flat file may include, but is not limited to, a JSON format, an XML format, and a TXT format.
  • The compositional details may include operational details of the main content 202. In an example embodiment, when the main content is the audio content, the operational details may include customization of sentences, music, and sounds in an audio track. In another example embodiment, when the main content is the still image, the operational details may include insertion of images, text, and 3D objects in the image. In yet another example embodiment, when the main content is the video content, the operational details may include insertion of images, text, 3D objects, and other videos in the video content.
  • Further, the dynamic composition description details 204 may include details pertaining to the detected placeholder in the main content 202. As explained earlier, the placeholder is defined based on at least one of the timestamp, the time range, the frame range, and the reference area in the main content 202.
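A JSON flat file carrying the dynamic composition description details might look like the sketch below. The disclosure only requires a flat file in JSON, XML, or TXT format, so every field name in this schema (e.g., `main_content`, `placeholders`, `frame_range`, `reference_area`) is an assumption for illustration.

```python
import json

# Hypothetical JSON flat file describing one placeholder in the main content,
# defined here by both a frame range and a reference area.
flat_file = """
{
  "main_content": "base_video.mp4",
  "placeholders": [
    {"id": "billboard",
     "frame_range": [1, 12],
     "reference_area": {"x": 120, "y": 40, "w": 300, "h": 180}}
  ]
}
"""

details = json.loads(flat_file)
placeholder = details["placeholders"][0]
```

A detector could then locate the placeholder by reading `frame_range` (for video) or `reference_area` (for a still image) from the parsed structure.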
  • In case of the main content 202 being an animation, the system 100 may detect a location of the placeholder based on different coordinates over the time dimension. Such coordinates may be defined on key frames. A key frame is a frame of the animation that is used as a reference for locating the placeholder. In an example embodiment, the system 100 may define two key frames, a first key frame and a second key frame. Specifically, a first frame and a twelfth frame of the animation may be defined as the first key frame and the second key frame, respectively.
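One plausible way to locate the placeholder on frames between two key frames is linear interpolation of its zone coordinates, sketched below. The disclosure does not specify the interpolation scheme, and the coordinate values are invented for illustration; only the choice of the first and twelfth frames as key frames comes from the example above.

```python
def interpolate_zone(zone_a, zone_b, kf_a, kf_b, frame):
    """Linearly interpolate each zone coordinate for an in-between frame."""
    t = (frame - kf_a) / (kf_b - kf_a)
    return tuple(a + t * (b - a) for a, b in zip(zone_a, zone_b))

# Assumed zone = (x, y, width, height) on the first and twelfth frames:
zone_kf1 = (100.0, 50.0, 200.0, 120.0)
zone_kf12 = (210.0, 50.0, 200.0, 120.0)

# Placeholder location on frame 6, between the two key frames.
zone_mid = interpolate_zone(zone_kf1, zone_kf12, 1, 12, 6)
```

With only horizontal motion between the key frames, the interpolated zone keeps its size and vertical position while its x coordinate advances proportionally.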
  • In one example embodiment, the list of media objects 206 may be an organized data structure containing descriptions of media objects. The list of media objects 206 may alternatively be an organized data structure containing descriptions of media objects and copies of the media objects. The list of media objects 206 may also or alternatively include corresponding placement attributes. In an example embodiment, the placement attributes may include a retrieval path of a corresponding media object. The retrieval path may be located either locally on an internal or an external drive, for example, a hard drive, or in one of a memory or a cache. In an example embodiment, the retrieval path may be located online, where it may be accessible by a Uniform Resource Locator (URL).
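A list of media objects with placement attributes, including a retrieval path that may be local or online, might be organized as sketched below. The attribute names and the scheme-based local/online test are assumptions for illustration, not part of the disclosure.

```python
from urllib.parse import urlparse

def is_online_path(retrieval_path: str) -> bool:
    """A path with an http(s) scheme is fetched by URL; anything else is local."""
    return urlparse(retrieval_path).scheme in ("http", "https")

# Hypothetical list of media objects with placement attributes
# (dimension, play duration, retrieval path).
media_objects = [
    {"name": "logo_local", "dimension": (300, 180), "play_duration": 5,
     "retrieval_path": "/var/cache/media/logo_local.png"},
    {"name": "logo_online", "dimension": (300, 180), "play_duration": 5,
     "retrieval_path": "https://cdn.example.com/logo_online.png"},
]
```

An obtainer could use such a test to decide whether to read a media object from a drive, memory, or cache, or to download it over the network.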
  • In yet another embodiment, system 100 may be embodied in the form of a user application for content playback (e.g., a standalone mobile application, or alternatively, a plug-in or extension for an Internet browser). Receiver 102 may receive a multimedia container 201 containing main content 202, dynamic composition description details 204, and the list of media objects 206. In an example embodiment, renderer 112 may render the final content at a back end server separate from a system displaying the final content to a user. Alternatively, renderer 112 may render the final content locally on the system displaying the final content. Rendering the final content locally may be beneficial in cases where no Internet connection is available and also in cases where the system displaying the final content has enough CPU power and memory to render the final content itself.
  • In an example embodiment, when the final content is rendered locally, the rendering may be done dynamically based on playback of the content. Specifically, the multimedia container 201 containing main content 202, dynamic composition description details 204, and the list of media objects 206 may be downloaded on a local machine such as a mobile device, smart TV or personal computer. Furthermore, as the main content 202 is being played, detector 104 may detect one or more placeholders in the main content 202 based on the dynamic composition description details 204. Obtainer 106 may obtain the list of media objects 206 and selector 108 may select the appropriate media object to add or replace into the main content 202 either by itself or based on information received from an external system or a user. All of the actions described above may occur in real time. Furthermore, renderer 112 may render the replacement/additional media objects at a predetermined time such as, for example, 1 minute before playback and render the replacement/additional media objects in the background. Alternatively, a targeted version of the content may be rendered upon receipt of the multimedia container 201 and then included in the playback of the final content.
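The one-minute-ahead background rendering described above can be reduced to a simple scheduling rule: begin rendering a replacement media object a fixed lead time before its placeholder is reached during playback. The one-minute figure is the example given in the passage; treating it as a configurable constant, and the function name, are assumptions.

```python
# Assumed lead time before a placeholder is reached (the "1 minute
# before playback" example from the passage).
RENDER_LEAD_SECONDS = 60.0

def render_start_time(placeholder_start: float) -> float:
    """Playback time at which background rendering should begin, clamped at 0."""
    return max(0.0, placeholder_start - RENDER_LEAD_SECONDS)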
  • FIG. 3 illustrates moving location of a placeholder 302 in an animation, according to one or more example embodiments of the present disclosure. For the sake of brevity, features of the present disclosure that are already explained in the description of FIG. 1 and FIG. 2 are not explained in the description of FIG. 3.
  • Block A in FIG. 3 illustrates the placeholder 302 being clearly visible in a key frame kf of the animation. The location of the placeholder 302 is defined by a zone (d, e, f, g) in the key frame kf. A vehicle 304 is also shown to be approaching the placeholder 302.
  • Block B in FIG. 3 illustrates a key frame kf+1 of the animation. As shown, in the present key frame, the vehicle 304 is driving past the placeholder 302. The portion of the vehicle 304 acting as a mask to the placeholder 302 is defined by the coordinates (h, i, j, k, l, m).
  • Block C in FIG. 3 illustrates a key frame kf+2 of the animation. As shown, in the present key frame, the portion of the vehicle 304 is acting as the mask to the placeholder 302 in the coordinates (h′, i′, j′, k′, l′, m′).
  • Block D in FIG. 3 illustrates a key frame kf+3 of the animation. In the present key frame, the location of the placeholder 302 is now defined by a zone (d′, e′, f′, g′). The portion of the vehicle 304 is acting as the mask to the placeholder 302 in the coordinates (h′, i′, j′, k′, l′, m′).
  • Referring back to FIG. 2, the list of media objects 206 may also include the corresponding placement attributes. In an example embodiment, the placement attributes include a retrieval path of a corresponding media object. The retrieval path may be located either locally on an internal or an external drive, for example, a hard drive, or in one of a memory or a cache. In an example embodiment, the retrieval path may be located online, where it may be accessible by a Uniform Resource Locator (URL).
  • The system 100 may determine that multiple media objects 206 may be positioned in a placeholder. After the processing, the system 100 may receive a selection signal 210 indicative of the selection of one of the media objects 206 for being shown to the user along with the main content 202 as the final content 208.
  • The media object 206 may be selected based on the user profile or the user instruction. In one example embodiment, an external selector may determine the media object 206 to be embedded in the main content 202 for rendering or conversion. In another example embodiment, the external selector may share details pertaining to the user profile with the system 100. The system 100 may then determine the media object 206 to be embedded in the main content 202.
  • After the selection of the media object 206, the system 100 may retrieve the selected media object 206 based on the retrieval path disclosed in the corresponding placement attributes. Subsequently, the final content 208 may be rendered to the user.
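Profile-based selection might be sketched as scoring each candidate media object by how well its descriptive tags overlap the preferences in the user profile. The tag/preference fields and the overlap-count heuristic are assumptions for illustration; the disclosure does not specify a scoring method.

```python
def select_media_object(media_objects, user_profile):
    """Pick the candidate whose tags best overlap the user's preferences."""
    prefs = set(user_profile.get("preferences", []))
    return max(media_objects,
               key=lambda obj: len(prefs & set(obj.get("tags", []))))

# Invented candidates and profile for illustration.
candidates = [
    {"name": "sports_ad", "tags": ["sports", "outdoor"]},
    {"name": "tech_ad", "tags": ["gadgets", "software"]},
]
profile = {"preferences": ["software", "gadgets", "travel"]}
chosen = select_media_object(candidates, profile)
```

The selected object's retrieval path would then be resolved from its placement attributes before the final content is rendered.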
  • In an example embodiment, the system 100 may generate multiple copies of the final content 208. In another example embodiment, the system 100 may share the final content 208 with the user through any network or communication protocol. In one example embodiment, the system 100 may play the final content 208 for the user through a local multimedia application. In another example embodiment, the system 100 may stream the final content 208 for the user through a browser player.
  • In one example embodiment, the system 100 may render the final content 208 through an ad hoc player that dynamically generates the final content 208 based on the selection signal 210. In another example embodiment, the system 100 may generate the final content 208 in a common multimedia file format.
  • FIG. 4 illustrates a hardware platform 400 for implementation of the system 100, according to an example of the present disclosure. In an example embodiment, the hardware platform 400 may be a computer system 400 that may be used with the examples described herein. The computer system 400 may represent a computational platform that includes components that may be in a server or another computer system. The computer system 400 may execute, by a processor (e.g., a single or multiple processors) or other hardware processing circuit, the methods, functions and other processes described herein. These methods, functions and other processes may be embodied as machine readable instructions stored on a computer readable medium, which may be non-transitory, such as hardware storage devices (e.g., RAM (random access memory), ROM (read only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), hard drives, and flash memory).
  • The computer system 400 may include a processor 402 that may implement or execute machine readable instructions performing some or all of the methods, functions, techniques and/or other processes described herein. Commands and data from the processor 402 may be communicated over a communication bus 404. The computer system 400 may also include a main memory 406, such as a random access memory (RAM), where the machine readable instructions and data for the processor 402 may reside during runtime, and a secondary data storage 408, which may be non-volatile and stores machine readable instructions and data. The memory 406 and data storage 408 are examples of non-transitory computer readable mediums. The memory 406 and/or the secondary data storage may store data used by the system 100, such as an object repository including web objects, configuration data, test data, etc.
  • The computer system 400 may include an Input/Output (I/O) device 410, such as a keyboard, a mouse, a display, etc. A user interface (UI) 412 can be a communication device that provides textual and graphical user interfaces to a user of the system 100. The UI 412 may operate with I/O device 410 to accept from and provide data to a user. The computer system 400 may include a network interface 414 for connecting to a network. Other known electronic components may be added or substituted in the computer system. The processor 402 may be designated as a hardware processor. The processor 402 may execute various components of the system 100 described above and perform the methods described below.
  • FIG. 5 illustrates a computer-implemented method 500 depicting functionality of the system 100, according to an example of the present disclosure.
  • For the sake of brevity, construction and operational features of the system 100 which are explained in detail in the description of FIG. 1, FIG. 2, FIG. 3, and FIG. 4 are not explained in detail in the description of FIG. 5.
  • At step 501, the method 500 includes receiving a main content. In an example, the main content includes at least one of the still image, the audio content or the video content. In one example embodiment, the receiver 102 of the system 100 may receive the main content.
  • At step 502, a placeholder in the main content is detected for placement of a media object. The media object includes at least one of an audio file, a video file, an image, and/or a text file. The placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content. In one example embodiment, the detector 104 of the system 100 may detect the placeholder.
  • In one example embodiment, the placeholder is defined based on the timestamp when the main content is the audio content. In another example embodiment, the placeholder is defined based on the frame range when the main content is the video content. In yet another example embodiment, the placeholder is defined based on the reference area when the main content is the still image.
  • At step 503, a plurality of media objects having placement attributes corresponding to the placeholder in the main content is obtained. As explained earlier, the placement attribute is indicative of characteristics of a media object to fit in the placeholder. In one example embodiment, the obtainer 106 of the system 100 may obtain the plurality of media objects.
  • At step 504, one of the plurality of media objects is selected for being placed in the placeholder of the main content, based on the user profile. The user profile is indicative of preferences of the user with respect to viewing of the content, based on the historical usage data of the user. In an alternate example embodiment, the media object to be placed in the placeholder is selected based on a user selection. In such a case, the user may provide a preference by way of a user instruction. In this example embodiment, the plurality of media objects is first provided to the user. Subsequently, an instruction is received from the user. The instruction is indicative of the selection of a media object. In one example embodiment, the selector 108 of the system 100 may select one of the plurality of media objects.
  • At step 505, the final content indicative of the selected media object embedded in the main content is generated. In one example embodiment, the generator 110 of the system 100 may generate the final content. In an example embodiment, at least one content enhancement effect is integrated into the selected media object embedded in the main content. The at least one content enhancement effect includes the blur effect, the sharpness effect, the saturation effect, the brightness effect, the hue effect, and the contrast effect.
  • The final content is then rendered to a user. In an alternate embodiment, the method 500 includes converting the final content into a new format before being rendered to the user.
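The steps of method 500 can be summarized in a minimal end-to-end sketch: receive, detect, obtain, select, and generate. Every structure below is an illustrative stand-in; real main content would be audio, image, or video data rather than strings, and the matching rules (exact dimension fit, first preference-tag match) are assumptions.

```python
def generate_final_content(main_content, description, media_objects, user_profile):
    """Illustrative pipeline over steps 502-505 of method 500."""
    # Step 502: detect the placeholder from the composition description.
    placeholder = description["placeholder"]
    # Step 503: keep only media objects whose placement attributes fit.
    fitting = [m for m in media_objects
               if m["dimension"] == placeholder["dimension"]]
    # Step 504: select by user-profile preference (first tag match here).
    prefs = set(user_profile["preferences"])
    selected = next(m for m in fitting if prefs & set(m["tags"]))
    # Step 505: embed the selected object in the main content.
    return {"main": main_content, "embedded": selected["name"]}

final = generate_final_content(
    "base_video",
    {"placeholder": {"dimension": (300, 180)}},
    [{"name": "ad_a", "dimension": (300, 180), "tags": ["sports"]},
     {"name": "ad_b", "dimension": (300, 180), "tags": ["tech"]}],
    {"preferences": ["tech"]},
)
```

Rendering, and any conversion into a new format, would then follow on the returned final content as described above.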
  • What has been described and illustrated herein is an example along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the spirit and scope of the subject matter, which is intended to be defined by the following claims and their equivalents in which all terms are meant in their broadest reasonable sense unless otherwise indicated.

Claims (20)

What is claimed is:
1. A system comprising:
a receiver to receive a main content, wherein the main content includes at least one of a still image, an audio content and a video content;
a detector in communication with the receiver to detect a placeholder in the main content for placement of a media object based on dynamic composition description details of the main content, wherein the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content and wherein the dynamic composition description details include operational details of the main content;
an obtainer in communication with the receiver and the detector, the obtainer to obtain information regarding a plurality of media objects, each media object having a placement attribute corresponding to the placeholder in the main content, wherein the placement attribute is indicative of a characteristic of the media object compatible with the placeholder, and the placement attribute includes at least one of a dimension, a play duration, and a retrieval path of the media object;
a selector in communication with the receiver, the detector, and the obtainer, the selector to select a media object from among the plurality of media objects for being placed in the placeholder, based on one of a user profile and an instruction from a user; and
a generator in communication with the receiver, the detector, the obtainer, and the selector, the generator to generate a final content indicative of the selected media object embedded in the main content.
2. The system of claim 1 further comprising a renderer in communication with the generator to render the final content to a user.
3. The system of claim 1 further comprising a converter in communication with the generator to convert the final content into a format other than a format of the final content as generated by the generator, prior to rendering the final content to a user.
4. The system of claim 1, wherein the generator is further to integrate at least one content enhancement effect with the selected media object embedded in the main content, wherein the at least one content enhancement effect includes a blur effect, a sharpness effect, a saturation effect, a brightness effect, a hue effect, and a contrast effect.
5. The system of claim 1, wherein when the main content is the audio content, the placeholder in the main content is defined based on at least the timestamp, and wherein when the main content is the video content, the placeholder in the main content is defined based on at least the frame range, and wherein when the main content is the still image, the placeholder in the main content is defined based on at least the reference area.
6. The system of claim 1, wherein the user profile is indicative of a preference of a user with respect to viewing of content, the preference being obtained based on one of historical usage data of the user and a user instruction.
7. The system of claim 1, wherein:
the selector is to:
provide the plurality of media objects to the user; and
receive the instruction from the user indicative of selection of a media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
8. A computer-implemented method executed by at least one processor, the method comprising:
receiving a main content, wherein the main content includes at least one of a still image, an audio content or a video content;
detecting a placeholder in the main content for placement of a media object based on dynamic composition description details of the main content, the media object including at least one of an audio file, a video file, an image, and a text, wherein the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content and wherein the dynamic composition description details include operational details of the main content;
obtaining a plurality of media objects, each media object having a placement attribute corresponding to the placeholder in the main content, wherein the placement attribute is indicative of a characteristic of the media object compatible with the placeholder, and the placement attribute includes at least one of a dimension, a play duration, and a retrieval path of the media object;
selecting a media object from among the plurality of media objects for being placed in the placeholder of the main content, based on one of a user profile and an instruction from a user; and
generating a final content indicative of the selected media object embedded in the main content.
9. The computer-implemented method of claim 8 further comprising rendering the final content to a user.
10. The computer-implemented method of claim 8 further comprising converting the final content into a format other than a format of the final content as generated by the generator, prior to rendering the final content to a user.
11. The computer-implemented method of claim 8 further comprising:
providing the plurality of media objects to a user; and
receiving an instruction from the user indicative of selection of the media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
12. The computer-implemented method of claim 8 further comprising integrating at least one content enhancement effect with the selected media object embedded in the main content, wherein the at least one content enhancement effect includes a blur effect, a sharpness effect, a saturation effect, a brightness effect, a hue effect, and a contrast effect.
13. The computer-implemented method of claim 8, wherein when the main content is the audio content, the placeholder in the main content is defined based on at least the timestamp, and wherein when the main content is the video content, the placeholder in the main content is defined based on at least the frame range, and wherein when the main content is the still image, the placeholder in the main content is defined based on at least the reference area.
14. The computer-implemented method of claim 8, wherein the user profile is indicative of a preference of a user with respect to viewing of content, the preference being obtained based on one of historical usage data of the user and a user instruction.
15. A computer-readable storage medium including instructions that, when executed by a processor, cause the processor to:
receive a main content, wherein the main content includes at least one of a still image, an audio content or a video content;
detect a placeholder in the main content for placement of a media object based on dynamic composition description details of the main content, the media object including at least one of an audio file, a video file, an image, and a text, wherein the placeholder is defined based on at least one of a timestamp, a time range, a frame range, and a reference area in the main content and wherein the dynamic composition description details include operational details of the main content;
obtain a plurality of media objects, each media object having a placement attribute corresponding to the placeholder in the main content, wherein the placement attribute is indicative of a characteristic of the media object compatible with the placeholder, and the placement attribute includes at least one of a dimension, a play duration, and a retrieval path of the media object;
select a media object from among the plurality of media objects for being placed in the placeholder of the main content, based on one of a user profile and an instruction from a user; and
generate a final content indicative of the selected media object embedded in the main content.
16. The computer-readable medium of claim 15 further including instructions that, when executed by the processor, cause the processor to render the final content to a user.
17. The computer-readable medium of claim 15 further including instructions that, when executed by the processor, cause the processor to convert the final content into a format other than a format of the final content as generated by the generator, prior to rendering the final content to a user.
18. The computer-readable medium of claim 15 further including instructions that, when executed by the processor, cause the processor to:
provide the plurality of media objects to a user; and
receive an instruction from the user indicative of selection of the media object, from among the plurality of media objects, for being placed in the placeholder of the main content.
19. The computer-readable medium of claim 15 further including instructions that, when executed by the processor, cause the processor to integrate at least one content enhancement effect with the selected media object embedded in the main content, wherein the at least one content enhancement effect includes a blur effect, a sharpness effect, a saturation effect, a brightness effect, a hue effect, and a contrast effect.
20. The computer-readable medium of claim 15 wherein the user profile is indicative of a preference of a user with respect to viewing of content, the preference being obtained based on one of historical usage data of the user and a user instruction.
US16/259,681 2018-02-16 2019-01-28 Dynamic content generation Active US11589125B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP18157254.6A EP3528196A1 (en) 2018-02-16 2018-02-16 Dynamic content generation
EP18157254 2018-02-16
EP18157254.6 2018-02-16

Publications (2)

Publication Number Publication Date
US20190261054A1 (en) 2019-08-22
US11589125B2 (en) 2023-02-21

Family

ID=61282969


US9646227B2 (en) 2014-07-29 2017-05-09 Microsoft Technology Licensing, Llc Computerized machine learning of interesting video sections
WO2016040833A1 (en) * 2014-09-12 2016-03-17 Kiswe Mobile Inc. Methods and apparatus for content interaction
US9852759B2 (en) * 2014-10-25 2017-12-26 Yieldmo, Inc. Methods for serving interactive content to a user
US9710712B2 (en) 2015-01-16 2017-07-18 Avigilon Fortress Corporation System and method for detecting, tracking, and classifying objects
US10019415B1 (en) 2015-08-28 2018-07-10 Animoto Inc. System and method for consistent cross-platform text layout
CN105872602A (en) 2015-12-22 2016-08-17 乐视网信息技术(北京)股份有限公司 Advertisement data obtaining method, device and related system
US9424494B1 (en) 2016-01-28 2016-08-23 International Business Machines Corporation Pure convolutional neural network localization
US10009642B2 (en) * 2016-03-24 2018-06-26 Comcast Cable Communications Management, Llc Systems and methods for advertising continuity
WO2018058554A1 (en) * 2016-09-30 2018-04-05 Intel Corporation Face anti-spoofing using spatial and temporal convolutional neural network analysis
EP3682642A4 (en) * 2017-09-15 2021-03-17 Imagine Communications Corp. Systems and methods for production of fragmented video content

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050188012A1 (en) * 2001-03-26 2005-08-25 Microsoft Corporation Methods and systems for synchronizing visualizations with audio streams
US20070162952A1 (en) * 2004-01-06 2007-07-12 Peter Steinborn Method and apparatus for performing synchronised audio and video presentation
US20070192782A1 (en) * 2004-08-09 2007-08-16 Arun Ramaswamy Methods and apparatus to monitor audio/visual content from various sources
US20130091519A1 (en) * 2006-11-23 2013-04-11 Mirriad Limited Processing and apparatus for advertising component placement
US10623789B1 (en) * 2011-03-14 2020-04-14 Vmware, Inc. Quality evaluation of multimedia delivery in cloud environments
US20130083859A1 (en) * 2011-10-04 2013-04-04 General Instrument Corporation Method to match input and output timestamps in a video encoder and advertisement inserter
US20130169801A1 (en) * 2011-12-28 2013-07-04 Pelco, Inc. Visual Command Processing
US20140331334A1 (en) * 2013-05-01 2014-11-06 Konica Minolta, Inc. Display System, Display Method, Display Terminal and Non-Transitory Computer-Readable Recording Medium Stored With Display Program
US20170270360A1 (en) * 2016-03-16 2017-09-21 Wal-Mart Stores, Inc. System for Verifying Physical Object Absences From Assigned Regions Using Video Analytics
US20190158899A1 (en) * 2016-07-25 2019-05-23 Canon Kabushiki Kaisha Information processing apparatus, control method of the same, and storage medium
US20180218727A1 (en) * 2017-02-02 2018-08-02 Microsoft Technology Licensing, Llc Artificially generated speech for a communication session
US20190122698A1 (en) * 2017-10-24 2019-04-25 Adori Labs, Inc. Audio encoding for functional interactivity
US10299008B1 (en) * 2017-11-21 2019-05-21 International Business Machines Corporation Smart closed caption positioning system for video content

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230005199A1 (en) * 2018-09-04 2023-01-05 Dish Network L.L.C. Mini-Banner Content

Also Published As

Publication number Publication date
US11589125B2 (en) 2023-02-21
EP3528196A1 (en) 2019-08-21

Similar Documents

Publication Publication Date Title
US11778272B2 (en) Delivery of different services through different client devices
US20190333283A1 (en) Systems and methods for generating and presenting augmented video content
KR101652030B1 (en) Using viewing signals in targeted video advertising
US9888289B2 (en) Liquid overlay for video content
US8166500B2 (en) Systems and methods for generating interactive video content
US10701127B2 (en) Apparatus and method for supporting relationships associated with content provisioning
US20170105051A1 (en) Method and Apparatus for Increasing User Engagement with Video Advertisements and Content by Summarization
US10674230B2 (en) Interactive advertising and marketing system
US20080281689A1 (en) Embedded video player advertisement display
US9769544B1 (en) Presenting content with video content based on time
US20150319493A1 (en) Facilitating Commerce Related to Streamed Content Including Video
US20130312049A1 (en) Authoring, archiving, and delivering time-based interactive tv content
US20160119661A1 (en) On-Demand Metadata Insertion into Single-Stream Content
US9113215B1 (en) Interactive advertising and marketing system
US20170041648A1 (en) System and method for supplemental content selection and delivery
US20230269436A1 (en) Systems and methods for blending interactive applications with television programs
US11589125B2 (en) Dynamic content generation
US8595760B1 (en) System, method and computer program product for presenting an advertisement within content
US20150227970A1 (en) System and method for providing movie file embedded with advertisement movie
US20220038757A1 (en) System for Real Time Internet Protocol Content Integration, Prioritization and Distribution
KR102303753B1 (en) Method and apparatus for providing a content
US20100250386A1 (en) Method and system for personalizing online content
US20240070725A1 (en) Ecosystem for NFT Trading in Public Media Distribution Platforms
WO2017007751A1 (en) Interactive advertising and marketing method
US20100049805A1 (en) Selection and Delivery of Messages Based on an Association of Pervasive Technologies

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ACCENTURE GLOBAL SOLUTIONS LIMITED, IRELAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SOUCHE, CHRISTIAN;GATTONI, LUCIA;MATHON, EDOUARD;AND OTHERS;REEL/FRAME:048426/0046

Effective date: 20180219

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STCF Information on status: patent grant

Free format text: PATENTED CASE