US20210067844A1 - Cloud-Based Image Rendering for Video Stream Enrichment - Google Patents

Info

Publication number
US20210067844A1
US20210067844A1 (application US16/551,467)
Authority
US
United States
Prior art keywords
video
interactive
computer
player
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/551,467
Other versions
US10924823B1 (en)
Inventor
Evan A. BINDER
Marc Junyent MARTIN
Jordi Badia Pujol
Avner Swerdlow
Miquel Angel Farre Guiu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Disney Enterprises Inc
Original Assignee
Disney Enterprises Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US16/551,467 (patent US10924823B1), Critical
Application filed by Disney Enterprises Inc filed Critical Disney Enterprises Inc
Assigned to DISNEY ENTERPRISES, INC. Assignment of assignors interest (see document for details). Assignors: THE WALT DISNEY COMPANY (SWITZERLAND) GMBH
Assigned to THE WALT DISNEY COMPANY (SWITZERLAND) GMBH. Assignment of assignors interest (see document for details). Assignors: MARTIN, MARC JUNYENT; PUJOL, JORDI BADIA; FARRE GUIU, MIQUEL ANGEL
Assigned to DISNEY ENTERPRISES, INC. Assignment of assignors interest (see document for details). Assignors: SWERDLOW, AVNER; BINDER, EVAN A.
Priority to JP2020112131A (patent JP7063941B2)
Priority to GB2009913.1A (patent GB2588271B)
Priority to KR1020200082769A (patent KR102380620B1)
Priority to CA3086239A (patent CA3086239C)
Publication of US10924823B1 (Critical)
Application granted (Critical)
Publication of US20210067844A1 (Critical)
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234345: Reformatting operations performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N21/23412: Processing of video elementary streams for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • H04N21/8545: Content authoring for generating interactive applications
    • G06T15/005: General purpose rendering architectures
    • G06T15/08: Volume rendering
    • H04L65/607; H04L65/608
    • H04L65/65: Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H04L65/70: Media network packetisation
    • H04L67/02: Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H04N21/2223: Secondary servers, e.g. proxy server or cable television head-end, being a public access point, e.g. for downloading to or uploading from clients
    • H04N21/258: Client or end-user data management, e.g. managing client capabilities, user preferences or demographics
    • H04N21/25858: Management of client data involving client software characteristics, e.g. OS identifier
    • H04N21/25891: Management of end-user data being end-user preferences
    • H04N21/4318: Generation of visual interfaces for content selection or interaction by altering the content in the rendering process, e.g. blanking, blurring or masking an image region
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/4722: End-user interface for requesting additional data associated with the content
    • H04N21/4781: Supplemental services: games
    • H04N21/8173: End-user applications, e.g. Web browser, game
    • H04N21/8456: Structuring of content by decomposing it in the time domain, e.g. in time segments
    • H04N21/858: Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot

Definitions

  • Mobile streaming applications on most personal communication devices are typically not optimized for sophisticated graphical interactivity enabled by video game engines, but instead rely on native software stacks designed to support video streaming.
  • Another conventional solution is to create an entirely separate application for interactive experiences.
  • However, this potential solution risks product and brand confusion for users, and also results in increased costs associated with marketing and customer acquisition, as well as application support and maintenance.
  • Moreover, mobile interactivity may be limited by the processing capacity of the mobile devices themselves, which tends to be significantly lower than that of gaming consoles or personal computers configured for gaming. This processing constraint may necessitate a distinct asset creation and rendering pipeline for creating a separate interactive application for use on mobile devices, which can undesirably result in considerable additional cost.
  • FIG. 1 shows a diagram of an exemplary system for performing cloud-based image rendering for video stream enrichment, according to one implementation.
  • FIG. 2A shows a diagram of an exemplary video stream including customizable video segments, according to one implementation
  • FIG. 2B shows a diagram of an exemplary enriched video stream including customizable video segments having rendered video enhancements inserted therein, according to one implementation.
  • FIG. 3 shows a flowchart presenting an exemplary method for performing cloud-based image rendering for video stream enrichment, according to one implementation.
  • The present application discloses systems and methods for performing cloud-based image rendering for video stream enrichment that address and overcome the deficiencies in the conventional art.
  • The present video stream enrichment solution may be implemented as an automated process. It is noted that, as used in the present application, the term “automated” refers to systems and processes that do not require human intervention. Although, in some implementations, a human system administrator may review or even modify content enrichment determinations made by the systems and according to the methods described herein, that human involvement is optional.
  • The cloud-based video stream enrichment solution described in the present application may be performed under the control of hardware processing components of the disclosed systems.
  • FIG. 1 shows a diagram of exemplary system 100 for performing cloud-based image rendering for video stream enrichment, according to one implementation.
  • System 100 includes video forwarding unit 101, video enrichment unit 102, and one or more video rendering engine(s) 108, such as video rendering servers, for example.
  • Video enrichment unit 102 of system 100 may include hardware processor 104 and memory 106 implemented as a non-transitory storage device storing software code 110.
  • Memory 106 of video enrichment unit 102 may store user profile database 112 including user profiles 124a, 124b, and 124c (hereinafter “user profiles 124a-124c”).
  • System 100 is implemented in a video distribution environment including content provider 114, one or more non-interactive video players 120 (hereinafter “non-interactive video player(s) 120”), one or more interactive video players 122a, 122b, and 122c (hereinafter “interactive video player(s) 122a-122c”), and communication network 116 having communication links 118 linking the elements of system 100 to one another, as well as to content provider 114, non-interactive video player(s) 120, and interactive video player(s) 122a-122c.
  • FIG. 1 shows video stream 130 provided to system 100 by content provider 114 and forwarded to non-interactive video player(s) 120 and video enrichment unit 102, one or more rendered video enhancements 138 (hereinafter “rendered video enhancement(s) 138”) obtained by video enrichment unit 102 from video rendering engine(s) 108, and enriched video streams 140a and 140b including rendered video enhancement(s) 138 distributed by video enrichment unit 102. Also shown in FIG. 1 are user input data 126a, 126b, and 126c received by video enrichment unit 102 from respective interactive video player(s) 122a-122c, and lightly enhanced video stream 142 optionally distributed to one or more of non-interactive video player(s) 120.
  • Memory 106 may take the form of any computer-readable non-transitory storage medium.
  • The expression “computer-readable non-transitory storage medium” refers to any medium, excluding a carrier wave or other transitory signal, that provides instructions to a hardware processor of a computing platform, such as hardware processor 104 of video enrichment unit 102.
  • A computer-readable non-transitory medium may correspond to various types of media, such as volatile media and non-volatile media, for example.
  • Volatile media may include dynamic memory, such as dynamic random access memory (dynamic RAM), while non-volatile memory may include optical, magnetic, or electrostatic storage devices.
  • Common forms of computer-readable non-transitory media include, for example, optical discs, RAM, programmable read-only memory (PROM), erasable PROM (EPROM), and FLASH memory.
  • System 100 may include one or more computing platforms, such as computer servers for example, which may be co-located, or may be part of an interactively linked but distributed system.
  • For instance, video forwarding unit 101, video enrichment unit 102, and video rendering engine(s) 108 may be part of cloud-based distributed system 100 interactively linked by communication network 116 and network communication links 118.
  • In such implementations, each of video forwarding unit 101, video enrichment unit 102, video rendering engine(s) 108, and user profile database 112 may be remote from one another within the distributed resources of cloud-based system 100.
  • Although interactive video player(s) 122a-122c are shown respectively as smartphone 122a, laptop computer 122b, and tablet computer 122c in FIG. 1, those representations are provided merely by way of example. More generally, interactive video player(s) 122a-122c may be any suitable mobile or stationary devices or systems that implement data processing capabilities sufficient to support connections to communication network 116, and implement the functionality ascribed to interactive video player(s) 122a-122c herein. For example, in other implementations, interactive video player(s) 122a-122c may take the form of a desktop computer, a smart television (smart TV), or a gaming console.
  • The feature “interactive video player” refers to a video player including a user interface, such as a web browser for example, supporting user feedback enabling a user of the interactive video player to actively participate in the action depicted by playback of a video stream by the interactive video player.
  • For example, an interactive video player may enable a user viewing a sporting event to provide inputs that change the viewpoint of the action and show three-dimensional (3D) renderings of the playing field and athletes from a perspective selected by the user.
  • Alternatively, or in addition, the interactive video player may enable the user to request special imagery conforming to their personal preferences, such as a 3D overlay showing statistics for one or more athletes on a fantasy sports team of the user.
  • As another example, an interactive video player displaying a movie to a user may enable the user to take control of an object or character appearing in a particular segment of the movie. For instance, in a customizable video segment of a movie showing an automobile race, a user of the interactive video player playing back the movie may take control of one of the race vehicles to become a virtual participant in the auto race.
  • By contrast, non-interactive video player(s) 120 provide only basic user controls, such as “play,” “pause,” “stop,” “rewind,” and “fast forward,” as well as volume control, thereby enabling only passive consumption of the content included in video stream 130 by content provider 114, or lightly enhanced video stream 142. That is to say, non-interactive video player(s) 120 may be configured to merely display the base content and graphical overlays selected by content provider 114 and included in video stream 130 forwarded to non-interactive video player(s) 120 by video forwarding unit 101 of system 100.
  • In some implementations, content provider 114 may be a media entity providing TV content as video stream 130.
  • Video stream 130 may be a linear TV program stream, for example, including an ultra high-definition (ultra HD), high-definition (HD), or standard-definition (SD) baseband video signal with embedded audio, captions, time code, and other ancillary metadata, such as ratings and/or parental guidelines.
  • In some implementations, video stream 130 may include multiple audio tracks, and may utilize secondary audio programming (SAP) and/or Descriptive Video Service (DVS), for example.
  • Video stream 130 may include the same source video content broadcast to a traditional TV audience using a TV broadcasting platform (not shown in FIG. 1 ), which may include a conventional cable and/or satellite network, for example.
  • In addition, content provider 114 may find it advantageous or desirable to make content included in video stream 130 available via an alternative distribution channel, such as communication network 116, which may include a packet-switched network, for example, such as the Internet.
  • In that use case, video forwarding unit 101 detects non-interactive video player(s) 120 linked to video forwarding unit 101 over communication network 116, forwards video stream 130 to non-interactive video player(s) 120, and forwards the same video stream 130 to video enrichment unit 102 over communication network 116.
  • Video enrichment unit 102 receives video stream 130 from video forwarding unit 101, detects interactive video player(s) 122a-122c linked to video enrichment unit 102 over communication network 116, and identifies a video enhancement corresponding to one or more customizable video segments in video stream 130.
  • Video enrichment unit 102 then inserts the video enhancement as rendered video enhancement(s) 138 obtained from rendering engine(s) 108 into the one or more customizable video segments to produce enriched video stream 140a and/or 140b, and distributes the enriched video stream or streams to one or more of interactive video player(s) 122a-122c.
  • Notably, the video enhancement(s) 138 are not rendered on interactive video player(s) 122a-122c. Rather, the video enhancement(s) are rendered by remote rendering engine(s) 108 in cloud-based distributed system 100.
  • Video enrichment unit 102 of system 100 is configured to distribute enriched video stream 140 a and/or 140 b substantially in real-time with respect to receiving video stream 130 from video forwarding unit 101 . Moreover, in some implementations, system 100 is configured to distribute video stream 130 or lightly enhanced video stream 142 to non-interactive video player(s) 120 and to distribute enriched video stream 140 a and/or 140 b to interactive video player(s) 122 a - 122 c substantially concurrently.
  • In this way, the same base video stream 130 can advantageously be used to distribute enriched video content to interactive video player(s) 122a-122c capable of enabling interactive user feedback, as well as to distribute non-enriched video content to non-interactive video player(s) 120 lacking that functionality.
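The splicing behavior described above (cloud-rendered enhancements replace the default content of customizable segments, while predetermined segments and the base stream for non-interactive players remain untouched) can be sketched roughly as follows. This is an illustrative sketch only; the function name, tuple layout, and enhancement mapping are hypothetical assumptions, not the patent's implementation.

```python
def enrich_stream(segments, rendered_enhancements):
    """Splice cloud-rendered enhancements into the customizable segments
    of a video stream, leaving predetermined segments unchanged.

    segments: list of (segment_id, customizable, content) tuples.
    rendered_enhancements: mapping of segment_id to rendered content, as
    might be obtained from remote rendering engine(s) (hypothetical shape).
    """
    enriched = []
    for seg_id, customizable, content in segments:
        # Only customizable segments (e.g. 234a, 234b) may be replaced;
        # predetermined segments (e.g. 232a-232c) pass through untouched.
        if customizable and seg_id in rendered_enhancements:
            content = rendered_enhancements[seg_id]
        enriched.append((seg_id, customizable, content))
    return enriched

base = [("232a", False, "predetermined"),
        ("234a", True, "default"),
        ("232b", False, "predetermined")]

# Interactive players would receive the enriched stream; the unmodified
# base list is what non-interactive players would receive.
enriched = enrich_stream(base, {"234a": "rendered enhancement 238a"})
```

Because the base list is never mutated, a single incoming stream can serve both player populations, mirroring the concurrent-distribution point made above.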
  • The non-enriched video streams distributed to non-interactive video player(s) 120 may provide the content included in video stream 130 by content provider 114, without enhancement or customization.
  • In other words, the customizable video segments included in video stream 130 provided to non-interactive video player(s) 120 may include only the generic and predetermined content inserted into video stream 130 by content provider 114.
  • However, in other implementations, video stream 130 may include some basic enhancements and be distributed to one or more of non-interactive video player(s) 120 as lightly enhanced video stream 142 by video enrichment unit 102.
  • For example, where video stream 130 is a broadcast of a sporting event, and one of user profiles 124a-124c corresponding to a user of one of non-interactive video player(s) 120 includes data identifying a favorite team of that user, team colors and/or a team logo may be included in lightly enhanced video stream 142 as basic graphical overlays.
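As a rough illustration of how such profile-driven light enhancement might be selected, the sketch below attaches basic overlay metadata when a user profile names a favorite team. The function name, profile fields, and overlay records are hypothetical assumptions for illustration, not the patent's actual data model.

```python
def lightly_enhance(stream_metadata, user_profile):
    """Return stream metadata with basic graphical overlays (team colors,
    team logo) attached when the profile identifies a favorite team."""
    overlays = []
    team = user_profile.get("favorite_team")
    if team is not None:
        # Basic, non-interactive overlays only; no cloud rendering
        # of the kind used for the enriched streams is required.
        overlays.append({"type": "team_colors", "team": team})
        overlays.append({"type": "team_logo", "team": team})
    return {**stream_metadata, "overlays": overlays}

profile = {"user_id": "124a", "favorite_team": "Home United"}
enhanced = lightly_enhance({"event": "sporting event"}, profile)
```

A profile with no favorite team simply yields an empty overlay list, so the same path degrades gracefully to the unenhanced stream.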
  • FIG. 2A shows a more detailed diagram of an exemplary video stream including customizable video segments, according to one implementation.
  • As shown, video stream 230 includes one or more predetermined video segments 232a, 232b, and 232c (hereinafter “predetermined video segments 232a-232c”) and one or more customizable video segments 234a and 234b.
  • In other implementations, video stream 230 may include more, or many more, than three predetermined video segments 232a-232c.
  • Moreover, video stream 230 includes at least one of customizable video segments 234a or 234b, but may include more, or many more, than two customizable video segments.
  • Predetermined video segments 232a-232c include content included in video stream 230 by content provider 114 and are not subject to change.
  • By contrast, customizable video segments 234a and 234b include default content in video stream 230 that may be enriched, for example, by being enhanced or replaced by rendered video enhancement 138 in FIG. 1.
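The split between fixed and enrichable segments just described can be modeled with a minimal data structure. The sketch below is hypothetical and only illustrates the distinction between the two segment types; it is not the patent's actual stream format.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One segment of a video stream such as stream 230 (hypothetical model)."""
    segment_id: str
    customizable: bool  # True for 234a/234b-style segments, False for 232a-232c
    content: str        # stand-in for the encoded video payload

stream_230 = [
    Segment("232a", False, "predetermined content"),
    Segment("234a", True, "default content"),
    Segment("232b", False, "predetermined content"),
    Segment("234b", True, "default content"),
    Segment("232c", False, "predetermined content"),
]

# Only the customizable segments are candidates for enrichment; the
# predetermined segments are not subject to change.
enrichable = [s.segment_id for s in stream_230 if s.customizable]
```

Tagging each segment at authoring time lets a downstream enrichment unit locate insertion points without inspecting the video payload itself.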
  • Video stream 230, in FIG. 2A, corresponds in general to video stream 130, in FIG. 1. That is to say, video stream 130 may share any of the characteristics attributed to corresponding video stream 230 by the present disclosure, and vice versa. Consequently, although not shown in FIG. 1, video stream 130 may include features corresponding respectively to predetermined video segments 232a-232c and customizable video segments 234a and 234b.
  • FIG. 2B shows a more detailed diagram of exemplary enriched video stream 240 including customizable video segments 234a and 234b having respective rendered video enhancements 238a and 238b inserted therein, according to one implementation. It is noted that any feature in FIG. 2B identified by a reference number identical to one appearing in FIG. 2A corresponds respectively to that feature and may share any of the characteristics attributed to it above.
  • Enriched video stream 240 corresponds in general to either or both of enriched video streams 140a and 140b, in FIG. 1. That is to say, enriched video streams 140a and 140b may share any of the characteristics attributed to corresponding enriched video stream 240 by the present disclosure, and vice versa. Consequently, although not shown in FIG. 1, enriched video streams 140a and 140b may include features corresponding respectively to predetermined video segments 232a-232c and customizable video segments 234a and 234b having respective rendered video enhancements 238a and 238b inserted therein.
  • Rendered video enhancements 238a and 238b correspond in general to rendered video enhancement(s) 138, in FIG. 1.
  • That is to say, rendered video enhancements 238a and 238b may share any of the characteristics attributed to rendered video enhancement(s) 138 by the present disclosure, and vice versa.
  • FIG. 3 shows flowchart 350 presenting an exemplary method for performing cloud-based image rendering for video stream enrichment, according to one implementation.
  • With respect to FIG. 3, it is noted that certain details and features have been left out of flowchart 350 in order not to obscure the discussion of the inventive features in the present application.
  • flowchart 350 begins with detecting, by video forwarding unit 101 of system 100 , non-interactive video player(s) 120 linked to video forwarding unit 101 over communication network 116 (action 351 ).
  • Non-interactive video player(s) 120 are video players that are not configured to receive or transmit user inputs enabling the user to actively participate in the action depicted by playback of a video stream.
  • non-interactive video player(s) 120 provide only basic user controls, such as “play,” “pause,” “stop,” “rewind,” and “fast forward,” as well as volume control, thereby enabling only passive consumption of the content included in video stream 130 / 230 or lightly enhanced video stream 142 .
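The distinction drawn above can be sketched as a classification step like action 351, splitting linked players by whether they can accept user input. The class and attribute names below are illustrative assumptions, not part of the disclosed system:

```python
# Hypothetical sketch: routing linked players by interactive capability.
# "supports_user_input" is an invented capability flag for illustration.

class VideoPlayer:
    def __init__(self, player_id, supports_user_input):
        self.player_id = player_id
        self.supports_user_input = supports_user_input

def classify_players(players):
    """Split linked players into interactive and non-interactive groups."""
    interactive = [p for p in players if p.supports_user_input]
    non_interactive = [p for p in players if not p.supports_user_input]
    return interactive, non_interactive

players = [
    VideoPlayer("smartphone-122a", True),
    VideoPlayer("set-top-120", False),
]
interactive, non_interactive = classify_players(players)
```

In this reading, the forwarding unit would serve the second group and hand the first group off to the enrichment unit.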
  • Flowchart 350 continues with forwarding, by video forwarding unit 101 , video stream 130 / 230 to non-interactive video player(s) 120 (action 352 ).
  • video stream 130 / 230 is forwarded to non-interactive video player(s) 120 as an unenhanced video stream. That is to say, in those implementations, video stream 130 / 230 includes only the content received by video forwarding unit 101 from content provider 114 when it is forwarded to non-interactive video player(s) 120 .
  • video stream 130 / 230 may include some basic enhancements and be distributed to one or more of non-interactive video player(s) 120 as lightly enhanced video stream 142 by video enrichment unit 102 .
  • where video stream 130 / 230 is a broadcast of a sporting event, and one of user profiles 124 a - 124 c corresponding to a user of one of non-interactive video player(s) 120 includes data identifying a favorite team of that user, team colors and/or a team logo may be included in lightly enhanced video stream 142 as basic graphical overlays.
  • non-interactive video player(s) 120 may receive basic enhancements based on their geographical location.
  • non-interactive video player(s) 120 located in the state of Texas and receiving lightly enhanced video stream 142 of a professional football game may have team colors and/or team logos for both Texas based professional football franchises in lightly enhanced video stream 142 as graphical overlays.
  • the “light enhancements” included in lightly enhanced video stream 142 by video enrichment unit 102 may be distinguished from the video enhancements rendered as rendered video enhancement(s) 138 / 238 a / 238 b in that, by contrast to rendered video enhancements 138 / 238 a / 238 b , the light enhancements included in lightly enhanced video stream 142 neither require nor invite user feedback.
  • the light video enhancements included in lightly enhanced video stream 142 are typically limited to two-dimensional (2D) or 3D graphical overlays of team colors and/or logos, and/or player statistics provided for passive consumption by user(s) of non-interactive video player(s) 120 as an accompaniment to the content included in video stream 130 .
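The profile-based and location-based light enhancements described above can be sketched as follows; the profile fields, team table, and overlay records are invented for illustration:

```python
# Illustrative sketch of selecting "light" 2D overlays from a user profile
# or geographic location. These overlays are passive: they neither require
# nor invite user feedback, per the distinction drawn in the description.

TEXAS_TEAMS = ["franchise-A", "franchise-B"]  # hypothetical local teams

def light_overlays(profile):
    """Return passive graphical overlays for a non-interactive player."""
    overlays = []
    if profile.get("favorite_team"):
        overlays.append({"type": "team_colors", "team": profile["favorite_team"]})
        overlays.append({"type": "team_logo", "team": profile["favorite_team"]})
    elif profile.get("location") == "TX":
        # No stated favorite: fall back to overlays for local franchises.
        overlays.extend({"type": "team_logo", "team": t} for t in TEXAS_TEAMS)
    return overlays
```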
  • Flowchart 350 continues with forwarding, by video forwarding unit 101 , video stream 130 / 230 to video enrichment unit 102 (action 353 ), and receiving, by video enrichment unit 102 , video stream 130 / 230 from video forwarding unit 101 (action 354 ).
  • video stream 130 / 230 is forwarded to and received by video enrichment unit 102 as an unenhanced video stream. That is to say, in those implementations, video stream 130 / 230 includes only the content received by video forwarding unit 101 from content provider 114 when it is forwarded to video enrichment unit 102.
  • Video stream 130 / 230 may be received by video enrichment unit 102 through use of software code 110, executed by hardware processor 104.
  • Flowchart 350 continues with identifying, by video enrichment unit 102 , a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130 / 230 (action 356 ).
  • Identification of a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b may be performed by software code 110 of video enrichment unit 102 , executed by hardware processor 104 , and may be based on any of several different criteria.
  • action 356 and the subsequent actions of flowchart 350 will be described by reference to interactive video player 122 a . However, it is noted that the present method is equally applicable to any of interactive video player(s) 122 a - 122 c.
  • user profile 124 a corresponds to a user of interactive video player 122 a
  • information stored in user profile 124 a can be used to identify a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130 / 230 .
  • a viewing history of the user, user preferences stored in user profile 124 a , or the user's age, gender, or location of residence may be used to identify a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130 / 230 .
  • information stored in user profile 124 a describing the interactive features and functionality of interactive video player 122 a may be used to identify a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b .
  • identification of a video enhancement in action 356 may be performed automatically, without user feedback.
  • the user of interactive video player 122 a may affirmatively provide user input data 126 a selecting or otherwise enabling identification of a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130 / 230 .
  • user input data 126 a may be solicited from the user of interactive video player 122 a .
  • User input data 126 a may identify a favorite sports team, athlete, movie, TV program, dramatic character, or actor of the user of interactive video player 122 a , to name a few examples.
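The identification logic of action 356, preferring affirmative user input and otherwise falling back to profile data, might be sketched as below. The enhancement catalog, field names, and return convention are assumptions for illustration:

```python
# Hedged sketch of action 356: identifying a video enhancement for a
# customizable segment from explicit user input or user-profile data.

ENHANCEMENT_CATALOG = {  # hypothetical interest-to-enhancement mapping
    "sports": "3d_player_stats_overlay",
    "movies": "interactive_race_segment",
}

def identify_enhancement(user_profile, user_input=None):
    """Prefer affirmative user input; otherwise fall back to profile data."""
    if user_input and user_input.get("requested_enhancement"):
        return user_input["requested_enhancement"]
    for interest in user_profile.get("viewing_history", []):
        if interest in ENHANCEMENT_CATALOG:
            return ENHANCEMENT_CATALOG[interest]
    return None  # no enhancement identified; segment keeps default content
```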
  • flowchart 350 continues with inserting, by video enrichment unit 102 , rendered video enhancement(s) 138 / 238 a / 238 b into one or more of customizable video segments 234 a and 234 b to produce enriched video stream 140 / 240 (action 357 ). Insertion of rendered video enhancement(s) 138 / 238 a / 238 b into one or more of customizable video segments 234 a and 234 b may be performed by software code 110 of video enrichment unit 102 , executed by hardware processor 104 .
  • rendered video enhancement(s) 138 / 238 a / 238 b may take the form of an interactive quiz, game, or poll enabling multiple users to collaboratively participate in providing feedback and/or selecting additional video enhancements for rendering and insertion into one or more of customizable video segments 234 a and 234 b.
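The insertion of action 357 can be illustrated as a splice over the stream's segments, with predetermined segments passing through untouched; the dictionary-based segment representation is hypothetical:

```python
# Minimal sketch of action 357: replacing the default content of
# customizable segments with rendered enhancements, consumed in order.

def insert_enhancements(segments, rendered):
    """Produce an enriched stream; predetermined segments pass through."""
    rendered = list(rendered)
    enriched = []
    for seg in segments:
        if seg["customizable"] and rendered:
            enriched.append({**seg, "content": rendered.pop(0)})
        else:
            enriched.append(seg)
    return enriched

stream = [
    {"id": "232a", "customizable": False, "content": "base"},
    {"id": "234a", "customizable": True, "content": "default"},
    {"id": "234b", "customizable": True, "content": "default"},
]
enriched = insert_enhancements(stream, ["238a", "238b"])
```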
  • video enrichment unit 102 may obtain rendered video enhancement(s) 138 / 238 a / 238 b from one of video rendering engine(s) 108 communicatively coupled to video enrichment unit 102 .
  • video enrichment unit 102 may be communicatively coupled to video rendering engine(s) 108 via communication network 116 .
  • communication network 116 may be a packet-switched network, such as the Internet.
  • communication network 116 may include a broadband cellular network, such as a fourth generation cellular network (4G network), or a 5G network satisfying the IMT-2020 requirements established by the International Telecommunication Union (ITU), for example.
  • video enrichment unit 102 may identify a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in action 356 , but the rendering of that video enhancement to produce rendered video enhancement(s) 138 / 238 a / 238 b may be performed by one of video rendering engine(s) 108 .
  • video rendering engine(s) 108 may be configured to encode rendered video enhancement(s) 138 / 238 a / 238 b as one of Hypertext Transfer Protocol (HTTP) Live Stream (HLS) video, low-latency HLS (LHLS) video, or Dynamic Adaptive Streaming over HTTP (DASH) video, for example.
  • rendered video enhancement(s) 138 / 238 a / 238 b may include a metadata tag applied by video rendering engine(s) 108 that instructs interactive video player 122 a to enable interactive feedback by the user of interactive video player 122 a during playback of rendered video enhancement(s) 138 / 238 a / 238 b .
  • a metadata tag may take the form of an ID3 tag stored in rendered video enhancement(s) 138 / 238 a / 238 b by video rendering engine(s) 108 .
  • rendered video enhancement(s) 138 / 238 a / 238 b may be generated from video enhancement templates stored by the distributed memory resources of cloud-based system 100 and accessible by rendering engine(s) 108 .
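The metadata-tag mechanism described above might look like the following simplified sketch, where a rendering engine marks a rendered enhancement and a player derives its playback mode from the mark. Real ID3 tags are binary frames carried in the transport stream; the dictionary form here is purely illustrative:

```python
# Illustrative sketch: a rendering engine attaches an ID3-style tag to a
# rendered enhancement, and the player enables interactive feedback only
# while playing tagged segments.

def tag_enhancement(enhancement, interactive=True):
    """Mark a rendered enhancement as interactive (engine-side step)."""
    enhancement["metadata"] = {"ID3": {"interactive": interactive}}
    return enhancement

def playback_mode(segment):
    """Return the player mode implied by a segment's metadata tag."""
    tag = segment.get("metadata", {}).get("ID3", {})
    return "interactive" if tag.get("interactive") else "passive"

segment = tag_enhancement({"id": "238a"})
```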
  • Flowchart 350 can conclude with distributing, by video enrichment unit 102 , enriched video stream 140 a / 240 to one or more of interactive video player(s) 122 a - 122 c (action 358 ).
  • Distribution of enriched video stream 140 a / 240 in action 358 may be performed by software code 110 of video enrichment unit 102 , executed by hardware processor 104 , and via communication network 116 .
  • distribution of enriched video stream 140 a / 240 to one or more of interactive video player(s) 122 a - 122 c may occur in real-time with respect to video enrichment unit 102 receiving video stream 130 / 230 from video forwarding unit 101 .
  • enriched video stream 140 a / 240 may be produced and distributed to a single interactive video player, such as interactive video player 122 a .
  • more than one interactive video player may be used by a group of users sharing the experience of viewing video content together.
  • the same enriched video stream may be distributed to multiple interactive video players, as shown in FIG. 1 by enriched video stream 140 a being distributed to both of interactive video players 122 a and 122 b . That is to say, in some implementations, enriched video stream 140 a may be distributed to more than one but less than all of interactive video player(s) 122 a - 122 c linked to video enrichment unit 102 .
  • actions 351 through 358 may have their order rearranged, and/or some of actions 351 - 358 may be performed substantially in parallel.
  • actions 351 and 355 may be performed in parallel, while in another implementation, action 355 may be performed before actions 351 through 354 .
  • actions 352 and 353 may be performed in parallel.
  • actions 351 and 352 may be performed after action 358 .
  • actions 356 , 357 , and 358 may be performed in sequence for multiple different interactive video player(s) 122 a - 122 c concurrently. That is to say, concurrently with performance of actions 356 , 357 , and 358 of flowchart 350 for interactive video player 122 b and/or 122 a described above, those same actions may be performed by video enrichment unit 102 for one or more others of interactive video player(s) 122 a - 122 c .
  • video enrichment unit 102 may identify a different video enhancement corresponding to one or more of customizable video segments 234 a and 234 b for interactive video player 122 c , and may insert that different video enhancement as a different rendered video enhancement 138 / 238 a / 238 b into customizable video segment 234 a and/or 234 b to produce different enriched video stream 140 b / 240 .
  • Video enrichment unit 102 may then distribute different enriched video stream 140 b / 240 to interactive video player 122 c concurrently with distributing enriched video stream 140 a / 240 to interactive video player(s) 122 b and/or 122 a.
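The concurrent per-player enrichment described above can be sketched with a thread pool standing in for the distributed enrichment unit; the player IDs mirror FIG. 1, while the enhancement choices are invented:

```python
# Sketch of performing actions 356-358 concurrently for several interactive
# players, each possibly receiving a differently enriched stream.

from concurrent.futures import ThreadPoolExecutor

def enrich_for_player(player_id, base_stream, enhancement):
    """Identify, insert, and distribute an enhancement for one player."""
    return {"player": player_id, "stream": base_stream + [enhancement]}

requests = {
    "122a": "3d_stats",
    "122b": "3d_stats",     # 122a and 122b share enriched stream 140a
    "122c": "team_replay",  # 122c receives different enriched stream 140b
}
base = ["232a", "234a", "232b"]
with ThreadPoolExecutor() as pool:
    futures = {pid: pool.submit(enrich_for_player, pid, base, enh)
               for pid, enh in requests.items()}
    results = {pid: f.result() for pid, f in futures.items()}
```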
  • the present application discloses systems and methods for performing cloud-based image rendering for video stream enrichment.
  • the systems and methods disclosed herein augment what users of interactive video player(s) 122 a - 122 c are able to see when tuning into video stream 130 / 230, without requiring those users to install a customized application on their devices.
  • graphical video enhancements may be rendered for the interactive video player on one of remote video rendering engine(s) 108 that is driven by user input data, such as user input data 126 a , 126 b , or 126 c.
  • the cloud-based video enrichment solution disclosed in the present application confers several benefits.
  • the present solution allows a user's interaction experience with a video stream to be augmented or otherwise enriched by content that could not ordinarily be rendered on their video player due to performance limitations. That is to say, because, according to the present inventive principles, video enhancements such as 3D graphics are remotely rendered and distributed as part of enriched video stream 140 a / 140 b / 240 , the only constraint on the quality of rendered video enhancement(s) 138 / 238 a / 238 b is the processing capability of cloud-based video rendering engine(s) 108 .
  • the video enhancement or enhancements identified in action 356 of flowchart 350 are not rendered by the interactive video player receiving them.
  • the present solution supports the continued use of existing video streaming applications and existing web browser based video players while providing new and heretofore unavailable interactive experiences to users.
  • the same video stream is forwarded to the video forwarding unit 101 and the non-interactive video player(s) 120 , for example, from content provider 114 .
  • no additional software is required to view enriched video streams 140 a / 140 b / 240 or lightly enhanced video stream 142 in use cases in which rendered video enhancements 138 / 238 a / 238 b are encoded to conform to well established standards, such as HLS, LHLS, or DASH.
  • interactive views for the user of an interactive video player are not limited solely to 2D overlays, but may also include detailed 3D renderings of objects, rendered at a quality level that is unconstrained by the graphics processing capability of the user's video player (e.g., interactive video player(s) 122 a - 122 c ).
  • a video stream of a weather report could be enriched with 3D renderings of topography and visual effects for weather.
  • interactive video player(s) 122 a - 122 c advantageously receive enriched video stream 140 a / 140 b / 240 with rendered video enhancements 138 / 238 a / 238 b
  • non-interactive video player(s) 120 receive either standard static video stream 130 / 230 forwarded by video forwarding unit 101 or advantageously receive lightly enhanced video stream 142 from video enrichment unit 102 .

Abstract

According to one implementation, a cloud-based system for performing cloud-based image rendering for video stream enrichment includes a video forwarding unit and a video enrichment unit. The video forwarding unit is configured to detect one or more non-interactive video player(s) linked to the video forwarding unit over a communication network, forward a video stream to the non-interactive video player(s), and forward the video stream to the video enrichment unit. The video enrichment unit is configured to receive the video stream, detect one or more interactive video player(s) linked to the video enrichment unit over the communication network, identify a video enhancement corresponding to one or more customizable video segment(s) in the video stream, insert a rendered video enhancement into the one or more customizable video segment(s) to produce an enriched video stream, and distribute the enriched video stream to one or more of the interactive video player(s).

Description

    BACKGROUND
  • Mobile streaming applications on most personal communication devices, such as smartphones and tablet computers, for example, are typically not optimized for sophisticated graphical interactivity enabled by video game engines, but instead rely on native software stacks designed to support video streaming. As a result, it is often challenging to later include an interactive experience built for a game engine into a streaming application executed on a mobile device.
  • One conventional technique for including an interactive experience built for a game engine into a streaming application has been to develop a container solution that allows game engine based applications to reside within native video streaming applications built for a particular platform. However, containers are typically expensive to implement due to their relatively high initial build cost, a recurring cost for every feature or experience that is included in the container, and the additional processing overhead required to run each experience.
  • Another conventional solution is to create an entirely separate application for interactive experiences. However, this potential solution risks product and brand confusion for users, and also results in increased costs associated with marketing and customer acquisition, as well as application support and maintenance. Moreover, mobile interactivity may be limited by the processing capacity of the mobile devices themselves, which tends to be significantly lower than that of gaming consoles or personal computers configured for gaming. This processing constraint may necessitate a distinct asset creation and rendering pipeline for creating a separate interactive application for use on mobile devices, which can undesirably result in considerable additional cost.
  • SUMMARY
  • There are provided systems and methods for performing cloud-based image rendering for video stream enrichment, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a diagram of an exemplary system for performing cloud-based image rendering for video stream enrichment, according to one implementation;
  • FIG. 2A shows a diagram of an exemplary video stream including customizable video segments, according to one implementation;
  • FIG. 2B shows a diagram of an exemplary enriched video stream including customizable video segments having rendered video enhancements inserted therein, according to one implementation; and
  • FIG. 3 shows a flowchart presenting an exemplary method for performing cloud-based image rendering for video stream enrichment, according to one implementation.
  • DETAILED DESCRIPTION
  • The following description contains specific information pertaining to implementations in the present disclosure. One skilled in the art will recognize that the present disclosure may be implemented in a manner different from that specifically discussed herein. The drawings in the present application and their accompanying detailed description are directed to merely exemplary implementations. Unless noted otherwise, like or corresponding elements among the figures may be indicated by like or corresponding reference numerals. Moreover, the drawings and illustrations in the present application are generally not to scale, and are not intended to correspond to actual relative dimensions.
  • The present application discloses systems and methods for performing cloud-based image rendering for video stream enrichment that address and overcome the deficiencies in the conventional art. Moreover, the present video stream enrichment solution may be implemented as an automated process. It is noted that, as used in the present application, the term “automated” refers to systems and processes that do not require human intervention. Although, in some implementations, a human system administrator may review or even modify content enrichment determinations made by the systems and according to the methods described herein, that human involvement is optional. Thus, the cloud-based video stream enrichment solution described in the present application may be performed under the control of hardware processing components of the disclosed systems.
  • FIG. 1 shows a diagram of exemplary system 100 for performing cloud-based image rendering for video stream enrichment, according to one implementation. As shown in FIG. 1, system 100 includes video forwarding unit 101, video enrichment unit 102, and one or more video rendering engine(s) 108, such as video rendering servers, for example. As further shown in FIG. 1, video enrichment unit 102 of system 100 may include hardware processor 104 and memory 106 implemented as a non-transitory storage device storing software code 110. Moreover, in some implementations, as depicted in FIG. 1, memory 106 of video enrichment unit 102 may store user profile database 112 including user profiles 124 a, 124 b, and 124 c (hereinafter “user profiles 124 a-124 c”).
  • System 100 is implemented in a video distribution environment including content provider 114, one or more non-interactive video players 120 (hereinafter “non-interactive video player(s) 120”), one or more interactive video players 122 a, 122 b, and 122 c (hereinafter “interactive video player(s) 122 a-122 c”), and communication network 116 having communication links 118 linking the elements of system 100 to one another, as well as to content provider 114, non-interactive video player(s) 120, and interactive video player(s) 122 a-122 c. In addition, FIG. 1 shows video stream 130 provided to system 100 by content provider 114 and forwarded to non-interactive video player(s) 120 and video enrichment unit 102, one or more rendered video enhancements 138 (hereinafter “rendered video enhancement(s) 138”) obtained by video enrichment unit 102 from video rendering engine(s) 108, and enriched video streams 140 a and 140 b including rendered video enhancement(s) 138 distributed by video enrichment unit 102. Also shown in FIG. 1 are user input data 126 a, 126 b, and 126 c received by video enrichment unit 102 from respective interactive video player(s) 122 a-122 c, and lightly enhanced video stream 142 optionally distributed to one or more of non-interactive video player(s) 120.
  • With respect to the representation of system 100 shown in FIG. 1, it is noted that although software code 110 and user profile database 112 are depicted as being stored in memory 106 for conceptual clarity, more generally, memory 106 may take the form of any computer-readable non-transitory storage medium. The expression “computer-readable non-transitory storage medium,” as used in the present application, refers to any medium, excluding a carrier wave or other transitory signal that provides instructions to a hardware processor of a computing platform, such as hardware processor 104 of video enrichment unit 102. Thus, a computer-readable non-transitory medium may correspond to various types of media, such as volatile media and non-volatile media, for example. Volatile media may include dynamic memory, such as dynamic random access memory (dynamic RAM), while non-volatile memory may include optical, magnetic, or electrostatic storage devices. Common forms of computer-readable non-transitory media include, for example, optical discs, RAM, programmable read-only memory (PROM), erasable PROM (EPROM), and FLASH memory.
  • It is further noted that although FIG. 1 depicts software code 110 and user profile database 112 as being mutually co-located in memory 106, that representation is also merely provided as an aid to conceptual clarity. More generally, system 100 may include one or more computing platforms, such as computer servers for example, which may be co-located, or may be part of an interactively linked but distributed system. For example, in one implementation, video forwarding unit 101, video enrichment unit 102, and video rendering engine(s) 108 may be part of cloud-based distributed system 100 interactively linked by communication network 116 and network communication links 118. Thus, it is to be understood that each of video forwarding unit 101, video enrichment unit 102, video rendering engine(s) 108, and user profile database 112 may be remote from one another within the distributed resources of cloud-based system 100.
  • It is also noted that, although interactive video player(s) 122 a-122 c are shown respectively as smartphone 122 a, laptop computer 122 b, and tablet computer 122 c, in FIG. 1, those representations are provided merely by way of example. More generally, interactive video player(s) 122 a-122 c may be any suitable mobile or stationary devices or systems that implement data processing capabilities sufficient to support connections to communication network 116, and implement the functionality ascribed to interactive video player(s) 122 a-122 c herein. For example, in other implementations, interactive video player(s) 122 a-122 c may take the form of a desktop computer, a smart television (smart TV), or a gaming console.
  • As defined in the present application, the feature “interactive video player” refers to a video player including a user interface, such as a web browser for example, supporting user feedback enabling a user of the interactive video player to actively participate in the action depicted by playback of a video stream by the interactive video player. For example, an interactive video player may enable a user viewing a sporting event to provide inputs that change the viewpoint of the action and show three-dimensional (3D) renderings of the playing field and athletes from a perspective selected by the user. Moreover, in such an implementation, the interactive video player may enable the user to request special imagery conforming to their personal preferences, such as a 3D overlay showing statistics for one or more athletes on a fantasy sports team of the user.
  • As another example, an interactive video player displaying a movie to a user may enable the user to take control of an object or character appearing in a particular segment of the movie. For instance, in a customizable video segment of a movie showing an automobile race, a user of the interactive video player playing back the movie may take control of one of the race vehicles to become a virtual participant in the auto race.
  • By contrast, as defined in the present application, non-interactive video player(s) 120 provide only basic user controls, such as “play,” “pause,” “stop,” “rewind,” and “fast forward,” as well as volume control, thereby enabling only passive consumption of the content included in video stream 130 by content provider 114, or lightly enhanced video stream 142. That is to say, non-interactive video player(s) 120 may be configured to merely display the base content and graphical overlays selected by content provider 114 and included in video stream 130 forwarded to non-interactive video player(s) 120 by video forwarding unit 101 of system 100.
  • In one implementation, content provider 114 may be a media entity providing TV content as video stream 130. Video stream 130 may be a linear TV program stream, for example, including an ultra high-definition (ultra HD), high-definition (HD), or standard-definition (SD) baseband video signal with embedded audio, captions, time code, and other ancillary metadata, such as ratings and/or parental guidelines. In some implementations, video stream 130 may include multiple audio tracks, and may utilize secondary audio programming (SAP) and/or Descriptive Video Service (DVS), for example.
  • Video stream 130 may include the same source video content broadcast to a traditional TV audience using a TV broadcasting platform (not shown in FIG. 1), which may include a conventional cable and/or satellite network, for example. In addition, and as depicted in FIG. 1, content provider 114 may find it advantageous or desirable to make content included in video stream 130 available via an alternative distribution channel, such as communication network 116, which may include a packet-switched network, for example, such as the Internet.
  • According to the implementation of FIG. 1, video forwarding unit 101 detects non-interactive video player(s) 120 linked to video forwarding unit 101 over communication network 116, forwards video stream 130 to non-interactive video player(s) 120, and forwards the same video stream 130 to video enrichment unit 102 over communication network 116. Video enrichment unit 102 receives video stream 130 from video forwarding unit 101, detects interactive video player(s) 122 a-122 c linked to video enrichment unit 102 over communication network 116, and identifies a video enhancement corresponding to one or more customizable video segments in video stream 130. In addition, video enrichment unit 102 inserts the video enhancement as rendered video enhancement(s) 138 obtained from rendering engine(s) 108 into the one or more customizable video segments to produce enriched video stream 140 a and/or 140 b, and distributes the enriched video stream or streams to one or more of interactive video player(s) 122 a-122 c. In other words, the video enhancement(s) 138 are not rendered on interactive video player(s) 122 a-122 c. Instead, the video enhancement(s) are rendered by remote rendering engine(s) 108 in the cloud-based distributed system 100.
  • Video enrichment unit 102 of system 100 is configured to distribute enriched video stream 140 a and/or 140 b substantially in real-time with respect to receiving video stream 130 from video forwarding unit 101. Moreover, in some implementations, system 100 is configured to distribute video stream 130 or lightly enhanced video stream 142 to non-interactive video player(s) 120 and to distribute enriched video stream 140 a and/or 140 b to interactive video player(s) 122 a-122 c substantially concurrently.
  • As a result of the foregoing, the same base video stream 130 can advantageously be used to distribute enriched video content to interactive video player(s) 122 a-122 c capable of enabling interactive user feedback, as well as to distribute non-enriched video content to non-interactive video player(s) 120 lacking that functionality. The non-enriched video streams distributed to non-interactive video player(s) 120 may provide the content included in video stream 130 by content provider 114, without enhancement or customization. In other words, the customizable video segments included in video stream 130 provided to non-interactive video player(s) 120 may include only the generic and predetermined content inserted into video stream 130 by content provider 114.
  • However, it is noted that in some implementations, video stream 130 may include some basic enhancements and be distributed to one or more of non-interactive video player(s) 120 as lightly enhanced video stream 142 by video enrichment unit 102. For example, where video stream 130 is a broadcast of a sporting event, and one of user profiles 124 a-124 c corresponding to a user of one of non-interactive video player(s) 120 includes data identifying a favorite team of that user, team colors and/or a team logo may be included in lightly enhanced video stream 142 as basic graphical overlays.
  • FIG. 2A shows a more detailed diagram of an exemplary video stream including customizable video segments, according to one implementation. As shown in FIG. 2A video stream 230 includes one or more predetermined video segments 232 a, 232 b, and 232 c (hereinafter “predetermined video segments 232 a-232 c”) and one or more customizable video segments 234 a and 234 b. It is noted that although video stream 230 is shown to include three predetermined video segments 232 a-232 c and two customizable video segments 234 a and 234 b, that representation is provided in the interests of clarity and economy of presentation. More generally, video stream 230 may include more or many more than three predetermined video segments 232 a-232 c. Moreover, video stream 230 includes at least one of customizable video segments 234 a or 234 b, but may include more or many more than two customizable video segments.
  • Predetermined video segments 232 a-232 c include content included in video stream 230 by content provider 114 and are not subject to change. By contrast, customizable video segments 234 a and 234 b include default content in video stream 230 that may be enriched, for example, by being enhanced or replaced by rendered video enhancement 138 in FIG. 1. Video stream 230, in FIG. 2A, corresponds in general to video stream 130, in FIG. 1. That is to say, video stream 130 may share any of the characteristics attributed to corresponding video stream 230 by the present disclosure, and vice versa. Consequently, although not shown in FIG. 1, video stream 130 may include features corresponding respectively to predetermined video segments 232 a-232 c and customizable video segments 234 a and 234 b.
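  • For purposes of illustration only, the segment structure described above may be sketched as a simple data model. It is noted that the following Python sketch forms no part of the disclosed implementation; the names used (`Segment`, `VideoStream`, `is_customizable`) are assumptions introduced solely to clarify the distinction between predetermined and customizable segments.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    content: str                    # placeholder for encoded video data
    is_customizable: bool = False   # True for segments such as 234a/234b

@dataclass
class VideoStream:
    segments: List[Segment] = field(default_factory=list)

    def customizable(self) -> List[Segment]:
        # Segments whose default content may be enhanced or replaced
        return [s for s in self.segments if s.is_customizable]

# A stream shaped like video stream 230 in FIG. 2A: three predetermined
# segments interleaved with two customizable segments
stream = VideoStream([
    Segment("232a"), Segment("234a", is_customizable=True),
    Segment("232b"), Segment("234b", is_customizable=True),
    Segment("232c"),
])
```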
  • FIG. 2B shows a more detailed diagram of exemplary enriched video stream 240 including customizable video segments 234 a and 234 b having respective rendered video enhancements 238 a and 238 b inserted therein, according to one implementation. It is noted that any feature in FIG. 2B identified by a reference number identical to one appearing in FIG. 2A corresponds respectively to that feature and may share any of the characteristics attributed to it above.
  • Enriched video stream 240, in FIG. 2B, corresponds in general to either or both of enriched video streams 140 a and 140 b, in FIG. 1. That is to say, enriched video streams 140 a and 140 b may share any of the characteristics attributed to corresponding enriched video stream 240 by the present disclosure, and vice versa. Consequently, although not shown in FIG. 1, enriched video streams 140 a and 140 b may include features corresponding respectively to predetermined video segments 232 a-232 c and customizable video segments 234 a and 234 b having respective rendered video enhancements 238 a and 238 b inserted therein.
  • In addition, rendered video enhancements 238 a and 238 b correspond in general to rendered video enhancement(s) 138, in FIG. 1. Thus, rendered video enhancements 238 a and 238 b may share any of the characteristics attributed to rendered video enhancement(s) 138 by the present disclosure, and vice versa.
  • The functionality of system 100 will be further described by reference to FIG. 3 in combination with FIGS. 1, 2A, and 2B. FIG. 3 shows flowchart 350 presenting an exemplary method for performing cloud-based image rendering for video stream enrichment, according to one implementation. With respect to the method outlined in FIG. 3, it is noted that certain details and features have been left out of flowchart 350 in order not to obscure the discussion of the inventive features in the present application.
  • Referring to FIG. 3 in combination with FIGS. 1 and 2A, flowchart 350 begins with detecting, by video forwarding unit 101 of system 100, non-interactive video player(s) 120 linked to video forwarding unit 101 over communication network 116 (action 351). Non-interactive video player(s) 120 are video players that are not configured to receive or transmit user inputs enabling the user to actively participate in the action depicted by playback of a video stream. As discussed above, non-interactive video player(s) 120 provide only basic user controls, such as “play,” “pause,” “stop,” “rewind,” and “fast forward,” as well as volume control, thereby enabling only passive consumption of the content included in video stream 130/230 or lightly enhanced video stream 142.
  • Flowchart 350 continues with forwarding, by video forwarding unit 101, video stream 130/230 to non-interactive video player(s) 120 (action 352). In some implementations, video stream 130/230 is forwarded to non-interactive video player(s) 120 as an unenhanced video stream. That is to say, in those implementations, video stream 130/230 includes only the content received by video forwarding unit 101 from content provider 114 when it is forwarded to non-interactive video player(s) 120.
  • However, as noted above, in some implementations, video stream 130/230 may include some basic enhancements and be distributed to one or more of non-interactive video player(s) 120 as lightly enhanced video stream 142 by video enrichment unit 102. For example, where video stream 130/230 is a broadcast of a sporting event, and one of user profiles 124 a-124 c corresponding to a user of one of non-interactive video player(s) 120 includes data identifying a favorite team of that user, team colors and/or a team logo may be included in lightly enhanced video stream 142 as basic graphical overlays. Alternatively, non-interactive video player(s) 120 may receive basic enhancements based on their geographical location. For instance, non-interactive video player(s) 120 located in the state of Texas and receiving lightly enhanced video stream 142 of a professional football game may have team colors and/or team logos for both Texas based professional football franchises in lightly enhanced video stream 142 as graphical overlays.
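  • The selection of light enhancements from a user profile or geographical location, as described above, may be sketched as follows. It is noted that this Python sketch is purely illustrative and forms no part of the disclosure: the profile keys (`favorite_team`, `location`), the overlay encoding, and the state-to-franchise table are assumptions introduced only to show the two selection paths.

```python
def light_overlays(profile: dict) -> list:
    """Pick basic graphical overlays for a non-interactive video player."""
    # Hypothetical lookup table; Texas has two professional football franchises
    teams_by_state = {"TX": ["Cowboys", "Texans"]}
    overlays = []
    if "favorite_team" in profile:
        # Profile-driven enhancement: the user's identified favorite team
        overlays.append(f"logo:{profile['favorite_team']}")
    elif "location" in profile:
        # Geography-driven enhancement: all franchises in the user's state
        overlays += [f"logo:{t}" for t in teams_by_state.get(profile["location"], [])]
    return overlays
```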
  • It is noted that the “light enhancements” included in lightly enhanced video stream 142 by video enrichment unit 102 may be distinguished from the video enhancements rendered as rendered video enhancement(s) 138/238 a/238 b in that, by contrast to rendered video enhancements 138/238 a/238 b, the light enhancements included in lightly enhanced video stream 142 neither require nor invite user feedback. On the contrary, the light video enhancements included in lightly enhanced video stream 142 are typically limited to two-dimensional (2D) or 3D graphical overlays of team colors and/or logos, and/or player statistics provided for passive consumption by user(s) of non-interactive video player(s) 120 as an accompaniment to the content included in video stream 130.
  • Flowchart 350 continues with forwarding, by video forwarding unit 101, video stream 130/230 to video enrichment unit 102 (action 353), and receiving, by video enrichment unit 102, video stream 130/230 from video forwarding unit 101 (action 354). In some implementations, video stream 130/230 is forwarded to and received by video enrichment unit 102 as an unenhanced video stream. That is to say, in those implementations, video stream 130/230 includes only the content received by video forwarding unit 101 from content provider 114 when it is forwarded to video enrichment unit 102. Video stream 130/230 may be received by video enrichment unit 102 through use of software code 110, executed by hardware processor 104.
  • Flowchart 350 continues with detecting, by video enrichment unit 102, interactive video player(s) 122 a-122 c linked to video enrichment unit 102 over communication network 116 (action 355). As discussed above, interactive video player(s) 122 a-122 c are video players including a user interface, such as a web browser for example, supporting user feedback enabling the user of the interactive video player to actively participate in the action depicted by playback of a video stream by the interactive video player. Thus, interactive video player(s) 122 a-122 c may take the form of smartphones, desktop or laptop computers, tablet computers, smart TVs, or gaming consoles, for example. Detection of interactive video player(s) 122 a-122 c may be performed by software code 110 of video enrichment unit 102, executed by hardware processor 104.
  • Flowchart 350 continues with identifying, by video enrichment unit 102, a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130/230 (action 356). Identification of a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b may be performed by software code 110 of video enrichment unit 102, executed by hardware processor 104, and may be based on any of several different criteria. In the interests of conceptual clarity, action 356 and the subsequent actions of flowchart 350 will be described by reference to interactive video player 122 a. However, it is noted that the present method is equally applicable to any of interactive video player(s) 122 a-122 c.
  • Where one of user profiles 124 a-124 c, such as user profile 124 a for example, corresponds to a user of interactive video player 122 a, information stored in user profile 124 a can be used to identify a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130/230. For instance, a viewing history of the user, user preferences stored in user profile 124 a, or the user's age, gender, or location of residence may be used to identify a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130/230. Alternatively, or in addition, information stored in user profile 124 a describing the interactive features and functionality of interactive video player 122 a may be used to identify a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b. Thus, in some implementations, identification of a video enhancement in action 356 may be performed automatically, without user feedback.
  • However, in other implementations, the user of interactive video player 122 a may affirmatively provide user input data 126 a selecting or otherwise enabling identification of a desirable video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in video stream 130/230. For example, in some implementations, particularly in use cases in which the information stored in user profile 124 a is sparse, user input data 126 a may be solicited from the user of interactive video player 122 a. User input data 126 a may identify a favorite sports team, athlete, movie, TV program, dramatic character, or actor of the user of interactive video player 122 a, to name a few examples.
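  • The identification criteria described above, i.e., stored profile data evaluated automatically, with affirmative user input solicited when the profile is sparse, may be sketched as follows. It is noted that this sketch is illustrative only; the function name, dictionary keys, and enhancement identifiers are assumptions and form no part of the disclosed implementation.

```python
from typing import Optional

def identify_enhancement(profile: dict, user_input: Optional[dict] = None) -> Optional[str]:
    """Identify a video enhancement for a customizable segment (cf. action 356)."""
    if profile.get("preferences"):
        # Automatic identification from stored profile data, without user feedback
        return f"enhancement_for:{profile['preferences'][0]}"
    if user_input and "favorite_team" in user_input:
        # Profile is sparse: fall back on solicited user input, such as a
        # favorite sports team, athlete, or program
        return f"enhancement_for:{user_input['favorite_team']}"
    return None  # nothing identified; the segment keeps its default content
```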
  • Referring to flowchart 350 with further reference to FIG. 2B, flowchart 350 continues with inserting, by video enrichment unit 102, rendered video enhancement(s) 138/238 a/238 b into one or more of customizable video segments 234 a and 234 b to produce enriched video stream 140/240 (action 357). Insertion of rendered video enhancement(s) 138/238 a/238 b into one or more of customizable video segments 234 a and 234 b may be performed by software code 110 of video enrichment unit 102, executed by hardware processor 104.
  • In some implementations, rendered video enhancement(s) 138/238 a/238 b may take the form of a user interactive object rendered as a 3D image. For example, user input data 126 a from a user of interactive video player 122 a watching a sporting event may cause video enrichment unit 102 to insert rendered video enhancement(s) 138/238 a/238 b in the form of 3D renderings of the playing field and athletes from a perspective selected by the user. In addition, or alternatively, in such an implementation, rendered video enhancement(s) 138/238 a/238 b may take the form of a 3D overlay showing team statistics or statistics for one or more individual athletes. Moreover, in some implementations, rendered video enhancement(s) 138/238 a/238 b may take the form of an interactive quiz, game, or poll enabling multiple users to collaboratively participate in providing feedback and/or selecting additional video enhancements for rendering and insertion into one or more of customizable video segments 234 a and 234 b.
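  • The insertion of action 357, replacing the default content of a customizable segment with a rendered enhancement while leaving predetermined segments untouched, may be sketched as a simple splice over a segment list. It is noted that the representation of segments as `(content, is_customizable)` pairs and the mapping from default content to rendered replacement are illustrative assumptions only.

```python
def enrich(segments, rendered):
    """Splice rendered enhancements into customizable segments (cf. action 357).

    `segments`: list of (content, is_customizable) pairs.
    `rendered`: maps a segment's default content to its rendered replacement.
    """
    out = []
    for content, customizable in segments:
        if customizable and content in rendered:
            # Replace the default content with the rendered enhancement;
            # the spliced segment is no longer customizable
            out.append((rendered[content], False))
        else:
            out.append((content, customizable))
    return out

# Stream 230 enriched with rendered enhancements 238a and 238b (cf. FIG. 2B)
stream = [("232a", False), ("234a", True), ("232b", False), ("234b", True), ("232c", False)]
enriched = enrich(stream, {"234a": "238a", "234b": "238b"})
```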
  • As shown in FIG. 1, in some implementations, video enrichment unit 102 may obtain rendered video enhancement(s) 138/238 a/238 b from one of video rendering engine(s) 108 communicatively coupled to video enrichment unit 102. As further shown in FIG. 1, in some of those implementations, video enrichment unit 102 may be communicatively coupled to video rendering engine(s) 108 via communication network 116. As noted above, in some use cases, communication network 116 may be a packet-switched network, such as the Internet. However, in other implementations, communication network 116 may include a broadband cellular network, such as a fourth generation cellular network (4G network), or a 5G network satisfying the IMT-2020 requirements established by the International Telecommunication Union (ITU), for example.
  • Thus, in some implementations, video enrichment unit 102 may identify a video enhancement corresponding to one or more of customizable video segments 234 a and 234 b in action 356, but the rendering of that video enhancement to produce rendered video enhancement(s) 138/238 a/238 b may be performed by one of video rendering engine(s) 108. In some of those implementations, video rendering engine(s) 108 may be configured to encode rendered video enhancement(s) 138/238 a/238 b as one of Hypertext Transfer Protocol (HTTP) Live Stream (HLS) video, low-latency HLS (LHLS) video, or Dynamic Adaptive Streaming over HTTP (DASH) video, for example.
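  • Where the enriched stream is delivered as HLS video, the splice point at a rendered enhancement would conventionally be signaled to the player with an EXT-X-DISCONTINUITY tag (per RFC 8216), so that the player resets its decoder across the encoding change. The following sketch of a minimal media playlist is illustrative only; the segment URIs and durations are hypothetical and form no part of the disclosure.

```python
def hls_playlist(segments):
    """Build a minimal HLS media playlist for a spliced stream.

    `segments`: list of (uri, duration, spliced) tuples, where `spliced`
    marks a remotely rendered enhancement preceded by a discontinuity.
    """
    lines = ["#EXTM3U", "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{int(max(d for _, d, _ in segments)) + 1}"]
    for uri, duration, spliced in segments:
        if spliced:
            # Decoder reset at the boundary of the rendered enhancement
            lines.append("#EXT-X-DISCONTINUITY")
        lines.append(f"#EXTINF:{duration:.1f},")
        lines.append(uri)
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

playlist = hls_playlist([
    ("seg232a.ts", 6.0, False),   # predetermined segment
    ("enh238a.ts", 6.0, True),    # rendered enhancement spliced in
    ("seg232b.ts", 6.0, False),
])
```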
  • Moreover, in some implementations in which the rendering of rendered video enhancement(s) 138/238 a/238 b is performed by one of video rendering engine(s) 108, rendered video enhancement(s) 138/238 a/238 b may include a metadata tag applied by video rendering engine(s) 108 that instructs interactive video player 122 a to enable interactive feedback by the user of interactive video player 122 a during playback of rendered video enhancement(s) 138/238 a/238 b. For example, such a metadata tag may take the form of an ID3 tag stored in rendered video enhancement(s) 138/238 a/238 b by video rendering engine(s) 108. It is noted that rendered video enhancement(s) 138/238 a/238 b may be generated from video enhancement templates stored by the distributed memory resources of cloud-based system 100 and accessible by rendering engine(s) 108.
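  • An ID3 tag of the kind described above carries timed metadata as binary frames. By way of illustration only, a minimal ID3v2.4 tag containing a single TXXX (user-defined text) frame may be assembled as follows; the description/value pair used to signal interactivity is a hypothetical example and forms no part of the disclosure.

```python
def synchsafe(n: int) -> bytes:
    """Encode an integer as a 4-byte ID3 synchsafe value (7 bits per byte)."""
    return bytes((n >> s) & 0x7F for s in (21, 14, 7, 0))

def id3_txxx(description: str, value: str) -> bytes:
    """Build a minimal ID3v2.4 tag carrying one TXXX user-text frame."""
    # Frame body: UTF-8 encoding byte, description, NUL separator, value
    body = b"\x03" + description.encode("utf-8") + b"\x00" + value.encode("utf-8")
    # Frame: 4-byte ID, synchsafe size, two flag bytes, body
    frame = b"TXXX" + synchsafe(len(body)) + b"\x00\x00" + body
    # Tag header: "ID3", version 2.4.0, flags, synchsafe size of all frames
    return b"ID3\x04\x00\x00" + synchsafe(len(frame)) + frame

# Hypothetical tag instructing a player to enable interactive feedback
tag = id3_txxx("interactive", "enable")
```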
  • In some implementations, flowchart 350 can conclude with distributing, by video enrichment unit 102, enriched video stream 140 a/240 to one or more of interactive video player(s) 122 a-122 c (action 358). Distribution of enriched video stream 140 a/240 in action 358 may be performed by software code 110 of video enrichment unit 102, executed by hardware processor 104, and via communication network 116. As noted above, in some implementations, distribution of enriched video stream 140 a/240 to one or more of interactive video player(s) 122 a-122 c may occur in real-time with respect to video enrichment unit 102 receiving video stream 130/230 from video forwarding unit 101.
  • In some use cases, enriched video stream 140 a/240 may be produced and distributed to a single interactive video player, such as interactive video player 122 a. However, in other implementations, more than one interactive video player may be used by a group of users sharing the experience of viewing video content together. In those implementations, the same enriched video stream may be distributed to multiple interactive video players, as shown in FIG. 1 by enriched video stream 140 a being distributed to both of interactive video players 122 a and 122 b. That is to say, in some implementations, enriched video stream 140 a may be distributed to more than one but less than all of interactive video player(s) 122 a-122 c linked to video enrichment unit 102.
  • It is noted that the order in which actions 351 through 358 are described by flowchart 350 is merely exemplary. That is to say, in other implementations, actions 351 through 358 may have their order rearranged, and/or some of actions 351-358 may be performed substantially in parallel. For example, in one implementation, actions 351 and 355 may be performed in parallel, while in another implementation, action 355 may be performed before actions 351 through 354. Alternatively, in some implementations, actions 352 and 353 may be performed in parallel. As yet another example, in some implementations, actions 351 and 352 may be performed after action 358.
  • It is also noted that actions 356, 357, and 358 may be performed in sequence for multiple different interactive video player(s) 122 a-122 c concurrently. That is to say, concurrently with performance of actions 356, 357, and 358 of flowchart 350 for interactive video player 122 b and/or 122 a described above, those same actions may be performed by video enrichment unit 102 for one or more others of interactive video player(s) 122 a-122 c. For example, in the case of interactive video player 122 c, video enrichment unit 102 may identify a different video enhancement corresponding to one or more of customizable video segments 234 a and 234 b for interactive video player 122 c, and may insert that different video enhancement as a different rendered video enhancement 138/238 a/238 b into customizable video segment 234 a and/or 234 b to produce different enriched video stream 140 b/240. Video enrichment unit 102 may then distribute different enriched video stream 140 b/240 to interactive video player 122 c concurrently with distributing enriched video stream 140 a/240 to interactive video player(s) 122 b and/or 122 a.
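  • The concurrent performance of actions 356 through 358 for multiple interactive video players may be sketched with a thread pool, one enrichment pipeline per player. It is noted that the stand-in identification and insertion logic below is purely hypothetical; only the shape of the concurrency is illustrated.

```python
from concurrent.futures import ThreadPoolExecutor

def enrich_for_player(player_id: str, stream: list) -> tuple:
    """Actions 356-358 for one player: identify, insert, distribute."""
    enhancement = f"238_for_{player_id}"                            # identify (356)
    enriched = [enhancement if s == "234a" else s for s in stream]  # insert (357)
    return player_id, enriched                                      # distribute (358)

base_stream = ["232a", "234a", "232b"]
players = ["122a", "122b", "122c"]
with ThreadPoolExecutor() as pool:
    # The same pipeline runs concurrently for each interactive player,
    # producing a differently enriched stream per player
    results = dict(pool.map(lambda p: enrich_for_player(p, base_stream), players))
```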
  • Thus, the present application discloses systems and methods for performing cloud-based image rendering for video stream enrichment. As described above, the systems and methods disclosed herein augment what users of interactive video player(s) 122 a-122 c are able to see when tuning into video stream 130/230, without requiring the user to use a customized application that runs on their device. Instead of creating a custom application that runs on a user's interactive video player that is capable of providing the graphics rendering required for such an interactive experience, according to the present inventive principles, graphical video enhancements may be rendered for the interactive video player on one of remote video rendering engine(s) 108 that is driven by user input data, such as user input data 126 a, 126 b, or 126 c.
  • The cloud-based video enrichment solution disclosed in the present application confers several benefits. First, the present solution allows a user's interaction experience with a video stream to be augmented or otherwise enriched by content that could not ordinarily be rendered on their video player due to performance limitations. That is to say, because, according to the present inventive principles, video enhancements such as 3D graphics are remotely rendered and distributed as part of enriched video stream 140 a/140 b/240, the only constraint on the quality of rendered video enhancement(s) 138/238 a/238 b is the processing capability of cloud-based video rendering engine(s) 108. Thus, it is emphasized that, according to the present inventive principles, the video enhancement or enhancements identified in action 356 of flowchart 350 are not rendered by the interactive video player receiving that video enhancement.
  • Second, the present solution supports the continued use of existing video streaming applications and existing web browser based video players while providing new and heretofore unavailable interactive experiences to users. For example, the same video stream received by video forwarding unit 101 from content provider 114 serves both non-interactive video player(s) 120 and the enrichment pipeline. Moreover, no additional software is required to view enriched video streams 140 a/140 b/240 or lightly enhanced video stream 142 in use cases in which rendered video enhancements 138/238 a/238 b are encoded to conform to well established standards, such as HLS, LHLS, or DASH.
  • Third, interactive views for the user of an interactive video player are not limited solely to 2D overlays, but may also include detailed 3D renderings of objects, rendered at a quality level that is unconstrained by the graphics processing capability of the user's video player (e.g., interactive video player(s) 122 a-122 c). For example, a video stream of a weather report could be enriched with 3D renderings of topography and visual effects for weather.
  • Fourth, interactive video player(s) 122 a-122 c with interactive capabilities advantageously receive enriched video stream 140 a/140 b/240 with rendered video enhancements 138/238 a/238 b, while non-interactive video player(s) 120 receive either standard static video stream 130/230 forwarded by video forwarding unit 101 or advantageously receive lightly enhanced video stream 142 from video enrichment unit 102.
  • From the above description it is manifest that various techniques can be used for implementing the concepts described in the present application without departing from the scope of those concepts. Moreover, while the concepts have been described with specific reference to certain implementations, a person of ordinary skill in the art would recognize that changes can be made in form and detail without departing from the scope of those concepts. As such, the described implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present application is not limited to the particular implementations described herein, but many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.

Claims (20)

1. A cloud-based system comprising:
a video forwarding computer configured to:
detect, over a communication network, at least one non-interactive video player linked to the video forwarding computer;
forward, over the communication network, a video stream to the at least one non-interactive video player, wherein the video stream includes at least one predetermined video segment and at least one customizable video segment; and
concurrently with forwarding the video stream to the at least one non-interactive video player, forward, over the communication network, the video stream to a video enrichment computer;
the video enrichment computer configured to:
receive, over the communication network, the video stream from the video forwarding computer;
detect, over the communication network, at least one interactive video player linked to the video enrichment computer;
start streaming, over the communication network, the at least one predetermined video segment of the video stream received from the video forwarding computer to the at least one interactive video player;
while streaming the at least one predetermined video segment of the video stream received from the video forwarding computer to the at least one interactive video player:
identify a video enhancement corresponding to the at least one customizable video segment in the video stream received from the video forwarding computer;
insert a rendered video enhancement into the at least one customizable video segment of the video stream received from the video forwarding computer to produce a customized video segment; and
continue streaming, over the communication network, the video stream, including the customized video segment, to the at least one interactive video player.
2. The cloud-based system of claim 1, wherein the customized video segment of the video stream is streamed to the at least one interactive video player in real-time with respect to receiving the video stream from the video forwarding computer.
3. The cloud-based system of claim 1, wherein the rendered video enhancement comprises a user interactive object rendered as a three-dimensional (3D) image.
4. The cloud-based system of claim 1, wherein the video enrichment computer is further configured to obtain the rendered video enhancement from a video rendering engine communicatively coupled to the video enrichment computer.
5. The cloud-based system of claim 4, wherein the video enrichment computer is communicatively coupled to the video rendering engine via the communication network.
6. The cloud-based system of claim 4, wherein the video rendering engine is configured to encode the rendered video enhancement as one of Hypertext Transfer Protocol (HTTP) Live Stream (HLS) video, low-latency HLS (LHLS) video, and Dynamic Adaptive Streaming over HTTP (DASH) video.
7. The cloud-based system of claim 4, wherein the rendered video enhancement comprises a metadata tag applied by the video rendering engine, the metadata tag instructing the at least one interactive video player to enable interactive feedback by a user during playback of the rendered video enhancement.
8. The cloud-based system of claim 7, wherein the metadata tag is an ID3 tag stored in the rendered video enhancement by the at least one video rendering engine.
9. The cloud-based system of claim 1, wherein the at least one interactive video player comprises at least a first interactive video player, a second interactive video player, and a third interactive video player, and wherein the video enrichment computer is further configured to stream the customized video segment to each of the first interactive video player and the second interactive video player, but not to the third interactive video player.
10. The cloud-based system of claim 1, wherein the at least one interactive video player comprises a plurality of interactive video players, and wherein the video enrichment computer is further configured to:
identify a different video enhancement corresponding to the at least one customizable video segment for another of the plurality of interactive video players;
insert the different video enhancement as a different rendered video enhancement into the at least one customizable video segment to produce a different customized video segment; and
stream the different customized video segment to the another of the plurality of interactive video players concurrently with streaming the customized video segment to the at least one interactive video player.
11. A method for use by a cloud-based system including a video forwarding computer and a video enrichment computer, the method comprising:
detecting, over a communication network, by the video forwarding computer, at least one non-interactive video player linked to the video forwarding computer;
forwarding, over the communication network, by the video forwarding computer, a video stream to the at least one non-interactive video player, wherein the video stream includes at least one predetermined video segment and at least one customizable video segment;
concurrently with forwarding the video stream to the at least one non-interactive video player, forwarding, over the communication network, by the video forwarding computer, the video stream to a video enrichment computer;
receiving, over the communication network, by the video enrichment computer, the video stream from the video forwarding computer;
detecting, over the communication network, by the video enrichment computer, at least one interactive video player linked to the video enrichment computer;
starting to stream, over the communication network, the at least one predetermined video segment of the video stream received from the video forwarding computer to the at least one interactive video player;
while streaming the at least one predetermined video segment of the video stream received from the video forwarding computer to the at least one interactive video player:
identifying, by the video enrichment computer, a video enhancement corresponding to the at least one customizable video segment in the video stream received from the video forwarding computer;
inserting, by the video enrichment computer, a rendered video enhancement into the at least one customizable video segment of the video stream received from the video forwarding computer to produce a customized video segment; and
continuing to stream, over the communication network, by the video enrichment computer, the video stream, including the customized video segment, to the at least one interactive video player.
12. The method of claim 11, wherein the customized video segment of the video stream is streamed to the at least one interactive video player in real-time with respect to receiving the video stream from the video forwarding computer.
13. The method of claim 11, wherein the rendered video enhancement comprises a user interactive object rendered as a three-dimensional (3D) image.
14. The method of claim 11, wherein the video enrichment computer obtains the rendered video enhancement from a video rendering engine communicatively coupled to the video enrichment computer.
15. The method of claim 14, wherein the video enrichment computer is communicatively coupled to the video rendering engine via the communication network.
16. The method of claim 14, wherein the video rendering engine is configured to encode the rendered video enhancement as one of Hypertext Transfer Protocol (HTTP) Live Stream (HLS) video, low-latency HLS (LHLS) video, and Dynamic Adaptive Streaming over HTTP (DASH) video.
17. The method of claim 14, wherein the rendered video enhancement comprises a metadata tag applied by the video rendering engine, the metadata tag instructing the at least one interactive video player to enable interactive feedback by a user during playback of the rendered video enhancement.
18. The method of claim 17, wherein the metadata tag is an ID3 tag stored in the rendered video enhancement by the at least one video rendering engine.
19. The method of claim 11, wherein the at least one interactive video player comprises at least a first interactive video player, a second interactive video player, and a third interactive video player, and wherein the video enrichment computer is further configured to stream the customized video segment to each of the first interactive video player and the second interactive video player, but not to the third interactive video player.
20. The method of claim 11, wherein the at least one interactive video player comprises a plurality of interactive video players, the method further comprising:
identifying, by the video enrichment computer, a different video enhancement corresponding to the at least one customizable video segment for another of the plurality of interactive video players;
inserting, by the video enrichment computer, the different video enhancement as a different rendered video enhancement into the at least one customizable video segment to produce a different customized video segment; and
streaming, by the video enrichment computer, the different customized video segment to the another of the plurality of interactive video players concurrently with streaming the customized video segment to the at least one interactive video player.
US16/551,467 2019-08-26 2019-08-26 Cloud-based image rendering for video stream enrichment Active US10924823B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US16/551,467 US10924823B1 (en) 2019-08-26 2019-08-26 Cloud-based image rendering for video stream enrichment
JP2020112131A JP7063941B2 (en) 2019-08-26 2020-06-29 Cloud-based image rendering for video stream enrichment
GB2009913.1A GB2588271B (en) 2019-08-26 2020-06-29 Cloud-based image rendering for video stream enrichment
KR1020200082769A KR102380620B1 (en) 2019-08-26 2020-07-06 Cloud-based image rendering for video stream enrichment
CA3086239A CA3086239C (en) 2019-08-26 2020-07-09 Cloud-based image rendering for video stream enrichment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16/551,467 US10924823B1 (en) 2019-08-26 2019-08-26 Cloud-based image rendering for video stream enrichment

Publications (2)

Publication Number Publication Date
US10924823B1 US10924823B1 (en) 2021-02-16
US20210067844A1 true US20210067844A1 (en) 2021-03-04

Family

ID=71949675

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/551,467 Active US10924823B1 (en) 2019-08-26 2019-08-26 Cloud-based image rendering for video stream enrichment

Country Status (5)

Country Link
US (1) US10924823B1 (en)
JP (1) JP7063941B2 (en)
KR (1) KR102380620B1 (en)
CA (1) CA3086239C (en)
GB (1) GB2588271B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116867554A * 2020-12-31 2023-10-10 Sony Interactive Entertainment Inc. Data display overlay for esports streams
US20220345794A1 (en) 2021-04-23 2022-10-27 Disney Enterprises, Inc. Creating interactive digital experiences using a realtime 3d rendering platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100162307A1 (en) * 2008-11-18 2010-06-24 Lg Electronics Inc. Method for receiving a broadcast signal and broadcast receiver
US20100257550A1 (en) * 2009-04-01 2010-10-07 Fourthwall Media Systems, methods, and apparatuses for enhancing video advertising with interactive content
US20180131975A1 (en) * 2016-11-09 2018-05-10 Charter Communications Operating, Llc Apparatus and methods for selective secondary content insertion in a digital network
US20200066304A1 (en) * 2018-08-27 2020-02-27 International Business Machines Corporation Device-specific video customization

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2290982A1 (en) * 2009-08-25 2011-03-02 Alcatel Lucent Method for interactive delivery of multimedia content, content production entity and server entity for realizing such a method
US8893168B2 (en) * 2012-02-07 2014-11-18 Turner Broadcasting System, Inc. Method and system for synchronization of dial testing and audience response utilizing automatic content recognition
US20150032900A1 (en) 2012-02-15 2015-01-29 Spencer Shanson System for seamlessly switching between a cloud-rendered application and a full-screen video sourced from a content server
WO2014145921A1 (en) * 2013-03-15 2014-09-18 Activevideo Networks, Inc. A multiple-mode system and method for providing user selectable video content
KR20140118604A (en) * 2013-03-29 2014-10-08 인텔렉추얼디스커버리 주식회사 Server and method for transmitting augmented reality object to personalized
US10375434B2 (en) * 2014-03-11 2019-08-06 Amazon Technologies, Inc. Real-time rendering of targeted video content
US10194177B1 (en) 2014-10-16 2019-01-29 Sorenson Media, Inc. Interweaving media content
US20170366867A1 (en) * 2014-12-13 2017-12-21 Fox Sports Productions, Inc. Systems and methods for displaying thermographic characteristics within a broadcast
US9774891B1 (en) * 2016-03-28 2017-09-26 Google Inc. Cross-platform end caps
WO2018213481A1 (en) 2017-05-16 2018-11-22 Sportscastr.Live Llc Systems, apparatus, and methods for scalable low-latency viewing of integrated broadcast commentary and event video streams of live events, and synchronization of event information with viewed streams via multiple internet channels
US11245964B2 (en) 2017-05-25 2022-02-08 Turner Broadcasting System, Inc. Management and delivery of over-the-top services over different content-streaming systems
US10271077B2 (en) 2017-07-03 2019-04-23 At&T Intellectual Property I, L.P. Synchronizing and dynamic chaining of a transport layer network service for live content broadcasting
US10306293B2 (en) 2017-07-18 2019-05-28 Wowza Media Systems, LLC Systems and methods of server based interactive content injection
GB201715124D0 (en) 2017-09-19 2017-11-01 Real Time Objects Ltd Graphics streaming

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210106912A1 (en) * 2019-10-11 2021-04-15 Nvidia Corporation Hardware acceleration and event decisions for late latch and warp in interactive computer products
US11660535B2 (en) * 2019-10-11 2023-05-30 Nvidia Corporation Hardware acceleration and event decisions for late latch and warp in interactive computer products
EP4354878A1 (en) * 2022-10-11 2024-04-17 Disney Enterprises, Inc. Multi-variant content streaming

Also Published As

Publication number Publication date
GB202009913D0 (en) 2020-08-12
GB2588271B (en) 2022-12-07
KR102380620B1 (en) 2022-03-31
KR20210025473A (en) 2021-03-09
GB2588271A (en) 2021-04-21
CA3086239A1 (en) 2021-02-26
CA3086239C (en) 2023-02-14
US10924823B1 (en) 2021-02-16
JP7063941B2 (en) 2022-05-09
JP2021035044A (en) 2021-03-01

Similar Documents

Publication Publication Date Title
US10924823B1 (en) Cloud-based image rendering for video stream enrichment
US11580699B2 (en) Systems and methods for changing a user's perspective in virtual reality based on a user-selected position
US11778138B2 (en) System and methods providing supplemental content to internet-enabled devices synchronized with rendering of first content
US9384424B2 (en) Methods and systems for customizing a plenoptic media asset
US11113897B2 (en) Systems and methods for presentation of augmented reality supplemental content in combination with presentation of media content
US20180167686A1 (en) Interactive distributed multimedia system
KR102589628B1 (en) System and method for minimizing occlusion of media assets by overlays by predicting the movement path of objects of interest in media assets and avoiding placing overlays on the movement path
US10158917B1 (en) Systems and methods for generating customized shared viewing experiences in virtual reality environments
CN108293140B (en) Detection of common media segments
US9409081B2 (en) Methods and systems for visually distinguishing objects appearing in a media asset
JP2009022010A (en) Method and apparatus for providing placement information of content to be overlaid to user of video stream
US20220321951A1 (en) Methods and systems for providing dynamic content based on user preferences
US9069764B2 (en) Systems and methods for facilitating communication between users receiving a common media asset
WO2016032399A1 (en) Selecting adaptive secondary content based on a profile of primary content
EP3732583A1 (en) Systems and methods for generating customized shared viewing experiences in virtual reality environments
Johnston ‘Pop-out footballers’, pop concerts and popular films: The past, present and future of three-dimensional television
US11902625B2 (en) Systems and methods for providing focused content
US20220360839A1 (en) Accessibility Enhanced Content Delivery
US20140373062A1 (en) Method and system for providing a permissive auxiliary information user interface
US20120185890A1 (en) Synchronized video presentation
WO2020131059A1 (en) Systems and methods for recommending a layout of a plurality of devices forming a unified display

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BINDER, EVAN A.;SWERDLOW, AVNER;SIGNING DATES FROM 20190815 TO 20190823;REEL/FRAME:050217/0662

Owner name: THE WALT DISNEY COMPANY (SWITZERLAND) GMBH, SWITZERLAND

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MARTIN, MARC JUNYENT;PUJOL, JORDI BADIA;FARRE GUIU, MIQUEL ANGEL;SIGNING DATES FROM 20190815 TO 20190822;REEL/FRAME:050217/0590

Owner name: DISNEY ENTERPRISES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THE WALT DISNEY COMPANY (SWITZERLAND) GMBH;REEL/FRAME:050235/0885

Effective date: 20190823

STCF Information on status: patent grant

Free format text: PATENTED CASE