WO2023022697A1 - Scalable system and method using logical entities for production of programs that use multi-media signals - Google Patents
- Publication number
- WO2023022697A1 (PCT/US2021/030421)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- stream
- logical entity
- signal
- routing
- input
- Prior art date
Links
- 238000004519 manufacturing process Methods 0.000 title claims abstract description 52
- 238000000034 method Methods 0.000 title claims abstract description 47
- 238000004891 communication Methods 0.000 claims abstract description 20
- 230000008569 process Effects 0.000 claims abstract description 15
- 230000009466 transformation Effects 0.000 claims description 37
- 230000002085 persistent effect Effects 0.000 claims description 7
- 238000013507 mapping Methods 0.000 claims description 5
- 230000000699 topical effect Effects 0.000 claims description 3
- 238000011867 re-evaluation Methods 0.000 description 13
- 230000006870 function Effects 0.000 description 9
- 230000008859 change Effects 0.000 description 8
- 238000012384 transportation and delivery Methods 0.000 description 8
- 238000009826 distribution Methods 0.000 description 7
- 230000009471 action Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 6
- 238000000844 transformation Methods 0.000 description 6
- 238000012545 processing Methods 0.000 description 5
- 230000004044 response Effects 0.000 description 5
- 238000004590 computer program Methods 0.000 description 4
- 230000003044 adaptive effect Effects 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 3
- 238000011156 evaluation Methods 0.000 description 3
- 238000012805 post-processing Methods 0.000 description 3
- 230000011664 signaling Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 230000003139 buffering effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000007726 management method Methods 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 238000012360 testing method Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000010485 coping Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 239000012634 fragment Substances 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000012367 process mapping Methods 0.000 description 1
- 230000001902 propagating effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000001953 sensory effect Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/23605—Creation or processing of packetized elementary streams [PES]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/236—Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
- H04N21/2362—Generation or processing of Service Information [SI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/238—Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
- H04N21/23805—Controlling the feeding rate to the network, e.g. by controlling the video pump
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/25—Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
- H04N21/258—Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
- H04N21/25808—Management of client data
- H04N21/25841—Management of client data involving the geographical location of the client
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/633—Control signals issued by server directed to the network components or client
- H04N21/6338—Control signals issued by server directed to the network components or client directed to network
Definitions
- the invention generally relates to a system and method for production of programs that require filtering and routing of multi-media (e.g., audiovisual) signals used in the programs.
- multi-media e.g., audiovisual
- streams originated at multiple sources are enriched before being ultimately presented to consumers.
- a program can be any one or combination of streamed content used for consumption by any audience, user, viewer, subscriber, etc.
- streamed multi-media content contains both video and audio data
- streaming is often referred to as video streaming or streaming with the understanding that streams carry both video and audio data.
- This specification refers to multi-media streaming, video streaming and streaming interchangeably.
- Video and audio codec formats refer to technologies used to both create and play back digital video and audio. Each format specifies how video and audio data are encoded and combined.
- Video transcoding, sometimes called video encoding, is the conversion from one digital encoding format to another, such as for movie data files. Video transcoding involves translating three elements of a digital video at the same time: the file format, the video, and the audio.
- a video engine is the underlying code used to support High Definition (HD) recording, playback, and editing.
- HD High Definition
- Video on demand (VOD) is a known distribution service employed by such companies as Netflix, Apple, etc., who are owners or publishers of streams conveyed to subscribers. VOD allows such subscribers to access multi-media streams without the constraints of static broadcasting schedules.
- live streaming of public or private programs, where content originating at venues such as stadiums, concert halls, TV studios, etc. is transmitted to viewers in real time using the Internet.
- IP Internet Protocol
- Live streaming can, for example, be implemented over the Internet using systems and methods disclosed in US Patent No. 8,599,851 issued to Amir et al. titled “System and method that routes flows via multicast flow transport for groups”; US Patent No. 8,437,267 issued to Amir et al. titled “System and method for recovery of packets in overlay networks”; US Patent No. 8,619,775 issued to Amir et al. titled “Scalable flow transport and delivery network and associated methods and systems”; US Patent No. 9,106,569 issued to Amir et al. titled “System and method that routes flows via multicast flow transport for groups”; and US Patent No. 8,181,210 issued to Amir et al.
- SRT is a transport protocol that delivers high-quality streams at low latency over noisy networks like the Internet.
- the protocol optimizes transport of streams over unpredictable networks by adapting to real-time network conditions, minimizing packet loss and creating a better viewing experience.
- Hypertext Transfer Protocol Live Streaming
- HTTP Live Streaming
- An HTTP server receives the request and sends a response message with the stream to the browser.
- An HTTP server maintains no information about the client, and if the client asks for the same stream again, the server resends the stream. For this reason, HTTP is called a stateless protocol. HTTP can use both non-persistent (volatile) connections and persistent connections.
- a volatile connection is a connection that is closed after an HTTP server sends a requested stream to a client.
- the connection is used for one request and one response.
- the HTTP server leaves the connections open after sending the response and hence subsequent requests and responses between the same client and server can be exchanged.
- the server closes a connection only when it is not used for a certain configurable amount of time.
- HLS works by breaking the overall stream into a sequence of small HTTP-based file downloads, each download containing one short chunk of an overall potentially unbounded stream.
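The chunking described above can be sketched as the generation of an HLS media playlist that references the segment files. This is an illustrative sketch, not part of the patent disclosure; the segment names and duration are assumptions.

```python
# Illustrative sketch (assumption, not from the patent): an HLS media
# playlist listing a sequence of short segment downloads.

def make_hls_playlist(segment_names, segment_duration=6.0):
    """Return an M3U8 media playlist referencing each segment file."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{int(segment_duration)}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in segment_names:
        lines.append(f"#EXTINF:{segment_duration:.3f},")  # per-segment duration
        lines.append(name)
    lines.append("#EXT-X-ENDLIST")  # omitted for live/unbounded streams
    return "\n".join(lines)

playlist = make_hls_playlist(["seg0.ts", "seg1.ts", "seg2.ts"])
```

For a live, potentially unbounded stream the `#EXT-X-ENDLIST` tag would be omitted and the playlist refreshed as new chunks are produced.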
- RTMP Real-Time Messaging Protocol
- TCP-based protocol which maintains persistent connections and allows for low-latency communication.
- RTMP splits streams into fragments, and their size is negotiated dynamically between clients and servers.
- RTMP defines several independent virtual channels used for sending and receiving packet streams.
- Real-time Transport Protocol is a network protocol for delivering streams over IP networks.
- RTP is used in communication and entertainment systems that involve streaming and video teleconferencing.
- RTP typically runs over User Datagram Protocol (UDP).
- UDP User Datagram Protocol
- RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the streams, RTCP is used to monitor transmission statistics and quality of service (QoS) and aids synchronization of multiple streams.
- Real Time Streaming Protocol is a network control protocol designed for use in entertainment and communications systems to control media servers. The protocol is used for establishing and controlling media sessions between endpoints. The transmission of streaming video/audio data itself is not a task performed by RTSP. Most RTSP servers use the RTP in conjunction with RTCP for streaming.
- WebRTC Web Real-Time Communication
- APIs application programming interfaces
- PMT Program Mapping Table
- Dynamic Adaptive Streaming over HTTP also known as MPEG-DASH
- MPEG-DASH is an adaptive bitrate streaming technique that enables high quality streaming over the Internet.
- MPEG-DASH can be delivered from conventional HTTP web servers.
- MPEG-DASH works by breaking the content into a sequence of small HTTP-based file segments, with each segment containing a short interval of playback time of content that is potentially many hours in duration, such as a movie or the live broadcast of a sporting event.
- the content is made available at a variety of different bit rates, where alternative segments are encoded at different bit rates covering aligned short intervals of playback time.
- MPEG-DASH allows devices like Internet-connected televisions, TV set-top boxes, desktop computers, smartphones, tablets, etc. to consume multi-media content delivered over the Internet, thereby coping with variable Internet receiving conditions.
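The adaptive-bitrate behavior described above can be sketched as the rendition-selection step a client performs; the rendition bitrates and safety factor below are hypothetical.

```python
# Sketch of adaptive bitrate selection (MPEG-DASH/HLS style).
# The rendition ladder and safety factor are assumptions for illustration.

RENDITIONS_BPS = [400_000, 1_200_000, 3_000_000, 6_000_000]

def pick_rendition(measured_bandwidth_bps, safety_factor=0.8):
    """Pick the highest rendition whose bitrate fits within a safety
    margin of the measured bandwidth; fall back to the lowest one."""
    budget = measured_bandwidth_bps * safety_factor
    candidates = [r for r in RENDITIONS_BPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_BPS)

choice = pick_rendition(5_000_000)  # 5 Mbps measured -> 3 Mbps rendition
```

Because alternative segments cover aligned intervals of playback time, the client can re-run this selection at every segment boundary and switch renditions seamlessly.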
- FIG. 1 shows implementation of content delivery in a system, which receives streams, for example bounded TS streams for programs, or Streamtag over unbounded WebRTC streams, or streams from an external device.
- streams are ingested by media servers known as Ingest servers.
- the ingested streams can be sent to output devices, recorder devices or multi-view devices.
- various systems, platforms, hardware, software, networks, facilities, etc. are required to develop, test, deliver, monitor, control or support streaming.
- video streams are “ingested” by bringing them from multiple venues into a production platform where users interface with one or more Ingest Servers.
- programs that use signals such as multimedia signals, received over communication channels from one or more sources are produced in a computing system by executing an application software in one or more servers of the computing system. At least one of the servers has one or more processors that process defined logical entities.
- a signal used in production of a program is received from a source over a communication channel at a server.
- An input logical entity having attributes associated with the source of the signal is defined such that the input logical entity is responsive to a user defined predicate comprising a logical expression for accepting or rejecting the signal.
- Also defined are a stream logical entity that identifies an accepted signal and a routing logical entity that establishes a connection between the stream logical entity and a destination. The accepted signal is routed to the destination based on a routing rule.
- an output logical entity is defined that associates the stream logical entity with the destination such that the output logical entity transforms the stream logical entity according to a user defined program transformation rule that is evaluated for mapping the accepted signal to the destination.
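A minimal sketch of how these logical entities could relate, assuming hypothetical class names (Input, Stream, Routing, Output) and representing the predicate, routing rule and transformation as plain callables:

```python
# Sketch of the logical entities described above; all names, fields and
# the tuple returned by deliver() are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Stream:                      # abstraction of an accepted Signal
    signal: dict
    session_tag: str = ""

@dataclass
class Input:                       # persistent entity with a user defined predicate
    predicate: Callable[[dict], bool]
    def accept(self, signal: dict) -> Optional[Stream]:
        # Only accepted Signals are abstracted as Streams.
        return Stream(signal) if self.predicate(signal) else None

@dataclass
class Output:                      # associates a Stream with a destination
    destination: str
    transform: Callable[[Stream], Stream] = lambda s: s
    def deliver(self, stream: Stream):
        return (self.destination, self.transform(stream))

@dataclass
class Routing:                     # connects a Stream to a destination by rule
    rule: Callable[[Stream], bool]
    output: Output
    def route(self, stream: Stream):
        return self.output.deliver(stream) if self.rule(stream) else None

inp = Input(predicate=lambda sig: sig.get("codec") == "h264")
out = Output(destination="cdn-1")
routing = Routing(rule=lambda st: True, output=out)

stream = inp.accept({"codec": "h264", "src": "venue-a"})
delivered = routing.route(stream) if stream else None
```

A rejected Signal (one whose predicate evaluates to FALSE) never becomes a Stream, so it never reaches Routing or an Output.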
- the signal can be an MPEG signal, where the user defined program transformation rule is based on Program Specific Information (PSI).
- PSI Program Specific Information
- the stream logical entity is associated with Packetized Elementary Stream (PES), where the PSI includes Program Mapping Table (PMT) associated with the stream logical entity.
- PMT Program Mapping Table
- an inter arrival time (IAT) of packets of the MPEG signal is recorded at a server and used by the output logical entity for achieving a constant bitrate.
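A sketch of the IAT bookkeeping described above, assuming 188-byte MPEG transport-stream packets and a hypothetical target bitrate:

```python
# Sketch (assumption): recording packet inter-arrival times (IAT) and
# deriving the fixed spacing needed to emit at a constant bitrate.

def inter_arrival_times(arrival_timestamps):
    """Return the gaps between consecutive packet arrivals (seconds)."""
    return [t1 - t0 for t0, t1 in zip(arrival_timestamps, arrival_timestamps[1:])]

def cbr_interval(packet_size_bits, target_bitrate_bps):
    """Fixed inter-packet spacing required to hit the target bitrate."""
    return packet_size_bits / target_bitrate_bps

iats = inter_arrival_times([0.000, 0.010, 0.025, 0.030])
spacing = cbr_interval(188 * 8, 1_000_000)   # one 188-byte TS packet at 1 Mbps
```

The Output can compare the recorded IATs against this fixed spacing and delay or release buffered packets accordingly to smooth the stream to a constant bitrate.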
- the routing rule is a user defined logical rule associated with the input logical entity such that a user defined logical rule is evaluated when the signal is applied to the input logical entity.
- the user defined predicate is based on time of day, a geographical zone, a location or an IP address.
- the input logical entity is defined as a persistent logical entity by a user such that the input logical entity is configured by the user via graphical user interfaces or Application Programming Interfaces (APIs).
- APIs Application Programming Interfaces
- the stream logical entity is associated with a session tag that creates a topical connection and/or a unique identification (ID) of the owner or publisher of the signal.
- ID unique identification
- a characteristic of the signal is recognized based on a confidence level and the recognized characteristic is used as a metadata associated with the signal.
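A sketch of confidence-gated metadata, where the recognizer output is a stand-in for an AI/ML model and the threshold value is an assumption:

```python
# Sketch: attach a recognized characteristic as metadata only when the
# recognizer's confidence level clears a threshold. The recognition
# tuple stands in for a real AI/ML model's output.

def attach_metadata(signal_meta, recognition, threshold=0.8):
    """Add the recognized characteristic to the signal's metadata if its
    confidence is at or above the threshold; otherwise leave it unchanged."""
    label, confidence = recognition
    if confidence >= threshold:
        signal_meta = dict(signal_meta, recognized=label, confidence=confidence)
    return signal_meta

meta = attach_metadata({"asset_id": "abc"}, ("smiling_face", 0.93))
ignored = attach_metadata({"asset_id": "abc"}, ("frowning_face", 0.41))
```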
- FIG. 1 shows implementation of content delivery in a system, which receives streams.
- FIG. 2 shows implementation of the present invention in a Live Video Cloud (LVC) platform used for production, post-processing, and distribution of programs through streaming.
- LVC Live Video Cloud
- FIG. 3 shows a diagram that layers the physical and logical components of the LVC platform.
- FIG. 4 shows a functional block diagram for attributes of an Input, where a Device provides a Video Signal to an Ingest load balancer coupled to a plurality of Ingest servers.
- FIGs. 5-12 show the process for creating Inputs in the LVC platform according to the present invention, where:
- FIG. 5 shows a portal for creating Input via a user interface.
- FIG. 6 shows selection of the “General” tab, which allows the operator to enter generic information that would make the Input discoverable within the LVC platform.
- FIG. 7 shows selection of the “Setting” tab, which contains SRT Push specific settings.
- FIG. 8 shows a selected datacenter from a list where the URL is served from.
- FIG. 9 shows other configurations like the latency, which in this example is set at 50ms.
- FIG. 10 shows predicates.
- FIG. 11 shows creating a first predicate that accepts or admits Streams only from a given IP address and a second predicate that requires the Streams to be accepted or admitted between 8pm and 10pm.
- FIG. 12 shows that when the operator clicks the ‘Create’ button, the LVC makes a request to the relevant API to create a record for the created Input in the DBMS.
- FIGs. 13-18 show the process for creating Outputs in the LVC platform according to the present invention, where:
- FIG. 13 shows a portal for creating Output via a user interface.
- FIG. 14 shows selection of the “General” tab, which allows the operator to enter generic information that would make the Output discoverable within the LVC platform.
- FIG. 15 shows selection of the “Setting” tab, which allows a user to set a delay, frame rate, resolution, as well as interlacing mode.
- FIG. 16 shows selection of the “Program Transformations” tab, which allows a user to set Transformation parameters.
- FIGs. 17 and 18 show selection of the “Destination” tab, which allows a user to set destination parameters.
- FIG. 19 shows attributes that define Streams.
- FIG. 20 shows implementation of Routing Rules.
- FIG. 21 shows Routing when connecting a Source of a Stream to a Target.
- FIG. 22 shows an example of a flow diagram amongst entities of the LVC platform.
- FIG. 23 shows a flow chart that exemplifies implementing an embodiment of the invention in a method implemented by logic in the LVC platform.
- FIG. 2 shows implementation of the present invention in a Live Video Cloud (LVC) platform used for production, post-processing, and distribution of programs through streaming.
- LVC Live Video Cloud
- the LVC uses servers that comprise processors, which process logical entities that represent various components used in production of programs. These logical entities allow the LVC to be scalable such that the processing power can be tailored to meet various production needs based on user defined parameters as further described below.
- the LVC platform is managed by a system administrator who controls databases as well as backend and frontend servers that form a database management system (DBMS) over a cloud computing network that includes the Internet.
- DBMS database management system
- the LVC platform supports programmable streaming pipelines which allows for making flexible routing choices based on user defined parameters without changing the physical structure of the platform.
- the described LVC platform provides automated multi-media routing over various streaming formats, while supporting monitoring and alerting during production process in a scalable manner.
- the LVC can be used by production personnel, i.e., operators, assigned by owners or publishers of streams, who are given privileges to perform administrative functions.
- the LVC is also used by contributors who are given privileges to contribute Signals used in programs, including owners or publishers of the programs.
- the administrative functions performed by the operators include creating such logical entities as Inputs, Streams, Routing and Outputs (as further described below) as well as managing membership of contributors individually or in teams.
- Users of the LVC can be production personnel, i.e., operators, as well as contributors of Signals to the LVC.
- an operator can be a company doing a production for the owner or publisher of a program.
- Productions can be created by the operators.
- the operators can assign Inputs and Outputs to a Production.
- Productions are constructs that provide structure, filtering by assigning a set of Inputs and Outputs, and the ability to start/stop Inputs and Outputs.
- the terms “component,” “system,” “platform,” “device,” “entity” and the like can refer to a computer entity or an entity related to an operational machine with one or more specific functionalities, such as encoder or transcoder functionalities as well as logical entities like Input, Output, Program Transformation, and Routing.
- the entities disclosed herein can be either hardware, a combination of hardware and software, software, or software in execution.
- Logical entities as used herein represent functionalities consumed or produced by logical activities. They are a representation of real- world objects within the system and as such they have relationships with many other entities within the system. The attributes of the entities reflect their purpose and usage.
- an entity or component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
- an application running on a server having processors and the server or a processor can be a component or an entity.
- One or more components or entities may reside within a process and/or thread of execution and a component may be localized on one computer or server and/or distributed between two or more computers or servers. Also, these components or entities can execute from various computer readable media having various data structures stored thereon.
- the components or entities may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems).
- FIG. 2 shows Video Signals in various formats using different transport protocols, e.g., TS, WebRTC, RTMP and SRT, being routed as Streams to Ingest Servers over the Internet.
- transport protocols e.g., TS, WebRTC, RTMP and SRT
- the LVC identifies various audiovisual object types using a universal unique identifier system known as EIDR, which identifies assets, owners/publishers, titles, edits, collections, series, seasons, episodes, and clips.
- EIDR Entertainment Identifier Registry
- IPGs Interactive Programming Guides
- TS Streams that are routed to Ingest Servers include audio/video data and In-band (IB) metadata associated with movies, newscasts, TV and sport programs etc.
- IB In-band
- Metadata can represent any kind of signaling data, for example, signaling graphical overlays, tickers, events in programs (such as a goal scored in a soccer match), precise timing of events (e.g. start of a program, a chapter, an advert, an ad break, etc.), asset IDs (e.g. EIDR, Ad-ID), closed captions, statistical data about a program, e.g., a game, etc. Metadata can also be used to signal a scene’s characteristics that could be recognized or otherwise measured based on some level of confidence, for example, using Artificial Intelligence (AI) or Machine Learning (ML).
- AI Artificial Intelligence
- ML Machine Learning
- AI/ML is used to recognize specified characteristics of audio/video data based on confidence levels. Once recognized, attributes or parameters associated with the specified characteristics can be used as metadata. For example, AI/ML can be used to analyze the face of a person in a scene and generate metadata associated with a confidence level for such analysis, e.g., a smiling face or a frowning face, etc.
- Inputs are defined as persistent logical entities by users and are configured and managed via graphical user interfaces or Application Programming Interfaces (APIs).
- APIs Application Programming Interfaces
- Inputs exist until explicitly deleted via a manual action by the operators.
- Inputs exist even while there are no incoming Signals.
- An Input may result in many Streams.
- Each Stream is associated with a session tag that creates a topical connection.
- Each session tag for a Stream is created by the owner/publisher of the Stream, who has a unique identification (ID).
- ID unique identification
- the operators manage Streams using administration interfaces provided by the frontend server.
- the DBMS manages acceptance or admission of contributed Signals and the enriching and routing of admitted or accepted outgoing Streams. All incoming Signals are aggregated.
- Ingest servers and the operators determine which Signals are to be abstracted to logical Streams before being redistributed and/or recorded and which Signals are to be discarded without logical abstractions to Streams.
- the operators can collect recorded Streams in a "Media Mix" container, typically a ZIP container, for re-use. Re-uses can include, for example, downloaded archives, use with third party editors, watch on demand after recording, and so forth.
- Ingest Servers read the recorded Streams from the Media Mix container and publish them to a content delivery network (CDN), which distributes them to viewers downstream.
- CDN content delivery network
- Ingest Servers allow the operators to manage accepted Streams and select the ones that are to be “on-air” based on configuration information retained by the DBMS, where the backend servers access the database to store and retrieve configuration parameters, IDs, attributes, predicates, routing rules, user profile data, etc. In this way, the operators can manage live sessions before accepted Streams are forwarded to Outputs for distribution.
- the frontend server displays all Inputs that are associated with a respective session tag to the operators, who can view and manage Stream acceptance and routing configuration parameters and user profiles, e.g., teams, groups, etc.
- the backend servers enable the operators to assign available Inputs to start a broadcast and trigger recording of live Streams using interfaces provided by the frontend server. In this way, the operators can copy links or embed codes to sessions for distribution.
- FIG. 3 shows a diagram that layers the physical and logical components of the LVC platform by abstraction as a logical layer bounded by two physical layers, namely an input physical layer and an output physical layer.
- the input physical layer shows input Sources of Signals, such as multi-media signals originated at input hardware Devices and/or Web Browser applications running on computing devices. Sources are entities that provide Signals which are applied to servers.
- the input physical layer also provides raw contributed Signals with IB metadata and/or inserted OOB metadata to a logical layer, implemented by the DBMS, which implements such components as Inputs, Streams, Outputs, Routing and Routing Rules as logical entities in the LVC.
- the Signals are applied to logical abstractions herein called Inputs.
- An admission control component in the policy layer acts as a filter for accepting or rejecting contributed Signals based on user defined predicates.
- a predicate is a logical expression which evaluates to TRUE or FALSE. The evaluation directs the execution logic in processing code. For example, a contributed Signal is accepted if the logical expression evaluates to TRUE. In the same example, a Signal is rejected if the logical expression evaluates to FALSE. In another example, the opposite logic may apply, as further described below. Signals are logically abstracted as Streams based on user defined predicates, which are used to determine if a Signal is rejected or accepted. Only accepted Signals are abstracted as Streams; rejected Signals are discarded without abstraction.
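The predicate evaluation described above can be sketched as follows; the example predicates (a source IP address and an 8pm-10pm window, as in FIG. 11) and the AND combination are assumptions for illustration.

```python
# Sketch of admission-control predicates: one accepts Streams only from
# a given IP address, another only between 8pm and 10pm. Field names
# and the AND combination are assumptions.
from datetime import time

def ip_predicate(allowed_ip):
    return lambda signal: signal.get("src_ip") == allowed_ip

def time_window_predicate(start=time(20, 0), end=time(22, 0)):
    return lambda signal: start <= signal["arrival_time"] <= end

def admit(signal, predicates):
    """A Signal is accepted only if every predicate evaluates to TRUE."""
    return all(p(signal) for p in predicates)

preds = [ip_predicate("203.0.113.5"), time_window_predicate()]
accepted = admit({"src_ip": "203.0.113.5", "arrival_time": time(21, 0)}, preds)
rejected = admit({"src_ip": "203.0.113.5", "arrival_time": time(23, 0)}, preds)
```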
- the Streams are associated with IB/OOB metadata as well as Packetized Elementary Streams (PES), a specification defined in MPEG-2 (ISO/IEC 13818-1) and ITU-T H.222.0.
- PES Packetized Elementary Stream
- the elementary stream is packetized by encapsulating sequential data bytes from the elementary stream inside PES packet headers.
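A simplified sketch of that encapsulation; real PES packets carry optional header fields (flags, PTS/DTS) that are omitted here, and the stream_id value is just the conventional first-video ID.

```python
# Sketch: wrapping elementary-stream bytes in a minimal PES packet header
# (start-code prefix, stream_id, packet length). Optional header fields
# of real PES packets are deliberately omitted.

def packetize_pes(payload: bytes, stream_id: int = 0xE0) -> bytes:
    """Prefix the payload with a minimal PES packet header.
    0xE0 is the first video stream_id; in this simplified form the
    length field counts only the payload bytes that follow it."""
    start_code_and_id = bytes([0x00, 0x00, 0x01, stream_id])
    length = len(payload).to_bytes(2, "big")
    return start_code_and_id + length + payload

pkt = packetize_pes(b"\x12\x34\x56")
```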
- Streams representing accepted signals are subject to Routing based on Routing Rules that are part of a policy layer.
- the policy layer implements other user defined parameters, including Routing Rules, and Program Transformation.
- the Routing Rules can take into account IB/OOB metadata during streaming before the logic layer routes the accepted Streams to an Output. Routings of the accepted Streams are created based on re-evaluation by a Routing Re-evaluation Loop. The Routings are based on user defined Routing Rules before the accepted Streams are routed to Outputs.
- Outputs may select a subset of the PES within the underlying physical transport based on user defined Program Transformations.
- Each Program Transformation comprises logical rules associated with an Output which are evaluated when a multiplex of a Stream is generated.
- Program Transformation uses Program-specific information (PSI) containing metadata about a program.
- PSI includes PAT (Program Association Table) and PMT (Program Mapping Table) among others.
- a PMT defines a program by specifying multiple PES by their PID. For instance, the PMT may reference the PES at PID 100 for video and PID 101 for audio.
- One MPEG Signal may contain many PMTs which are listed in the PAT. According to the present invention, Program Transformations select an accepted Stream ahead of time or just in time.
- the program transformation FIRST_VIDEO to PID 100 selects the first video just-in-time when a Program Mapping Table (PMT) is made available and distributes the first PES carrying a video signal on PID 100.
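As a hedged illustration of the just-in-time selection above, a first-video transformation against a PMT might look like the following sketch (the PMT representation and function name are assumptions for the example):

```python
# Hypothetical sketch: select the first video PES "just in time" once a
# PMT is available, and map it to a fixed output PID (100 by default),
# as in the FIRST_VIDEO transformation described above.

def first_video_to_pid(pmt, out_pid=100):
    """pmt: list of (pid, kind) entries as advertised in the Program Mapping Table."""
    for pid, kind in pmt:
        if kind == "video":
            return {pid: out_pid}  # distribute this PES on out_pid
    return {}                      # no video PES advertised yet

# e.g., a PMT referencing video on PID 100 and audio on PID 101
pmt = [(100, "video"), (101, "audio")]
mapping = first_video_to_pid(pmt)
```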
- Logical Outputs are processed by one or more processing units or processors, such as those used in one or more servers, in the output physical layer using logical rules associated with Outputs which are evaluated when a multiplex of a Stream is generated.
- the servers provide Signals that contain multiplex and PES for use by Content Delivery Networks (CDNs 1-4) or a Social Network.
- FIG. 4 shows a functional block diagram for attributes of a logical entity defined as an Input, including the owner of a Signal that is abstracted by the Input, End Points, Routing Rules, Predicates, State and Metadata. Inputs have states such as online, offline, error, unknown, etc.
- the Source can be a Device such as an encoder, a web browser that contributes Signals via a public or private WebRTC, or a transcoder.
- a Stream is a representation of an accepted Signal that is active and identifiable. Streams exist while accepted Signals are active.
- a Target/Destination is an entity that a Stream or Streams can be Routed to.
- a Target/Destination can be an Output, a transcoder, or a recorder with an inlet for receiving a Stream or Streams.
- Target/destination can have an inlet for Streams used for multiple viewing.
- Target/destination can also be an Output, a transcoder or a recorder.
- the output of Source can be Input to a component of the LVC platform.
- Input can be viewed as implemented in the logic layer shown in FIG. 3.
- the LVC supports two types of Inputs: “push” Input and “pull” Input.
- a push Input listens for connections and lets Sources push Signals into the LVC platform.
- a pull Input requires an active connection to Sources and pulls Signals into the LVC. Examples of Inputs are:
- Inputs are managed based on types by either an operator or a team/ group of operators, but not both. Certain Input types may not be managed.
- Inputs provide one or more URLs to which a Device (encoder, software, browser, etc.) can connect at Endpoints.
- An Endpoint is one end of a communication channel usually represented by the URL of a server or service. Endpoints become invalid when Inputs are switched off. Endpoints have a Time to Live (TTL) after which they expire.
- TTL may be implemented as a counter or timestamp that prevents packets from circulating indefinitely. TTL is not applied to certain Inputs, such as stable RTMP push URLs or hardware encoders.
- Endpoints can be dynamic in that they may not be known ahead of time. For example, an Input can generate Endpoints dynamically upon request.
- An Input can have one or more predicates. Predicates are functions of the form f(Input, stream) → Boolean (TRUE or FALSE). In one example, an Input does not accept a contributed Signal if any predicate is FALSE. In this example, the Input accepts the contributed Signal only if all predicates are TRUE. Predicates can be configured by the operators via the user interface provided by the web portal or an API. For example, functions may be configured by operators so that: “Signals can be accepted during a certain time of day”, “Signals can be accepted if originated within a geofence or within a zone or location” or “Signals from a specific IP address can be accepted,” etc.
- the function geofence: (lat, lng, d) → (Input, stream) → Boolean is a function producing predicates for a given latitude, longitude and distance. The function may be used to evaluate metadata of contributed Signals accordingly and reject all Signals that are not in proximity of a specific point of interest. If Predicates are FALSE, a Signal is dropped from Ingest without being abstracted as a Stream.
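A geofence predicate factory of this shape could be sketched as follows (a hypothetical implementation using the haversine great-circle distance; the stream metadata field names are assumptions):

```python
import math

# Hypothetical sketch of the geofence predicate factory described above:
# geofence(lat, lng, d) returns a predicate (input, stream) -> bool that
# accepts only Streams whose metadata places them within d km of the point.

def geofence(lat, lng, d_km):
    def predicate(inp, stream):
        s_lat = stream["metadata"]["lat"]
        s_lng = stream["metadata"]["lng"]
        # Haversine great-circle distance in kilometres
        p1, p2 = math.radians(lat), math.radians(s_lat)
        dphi = math.radians(s_lat - lat)
        dlmb = math.radians(s_lng - lng)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 6371.0 * 2 * math.asin(math.sqrt(a)) <= d_km
    return predicate

# Accept only Streams originating within 50 km of lower Manhattan
near_nyc = geofence(40.7128, -74.0060, 50)
```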
- the LVC platform may create predicates for internal use by the system administrator. For instance, the incoming connections for an Input can be limited using an internal predicate that allows only up to a maximum of N connections.
- the LVC platform supports user defined and automated Routings of routable entities, which adhere to various routing protocols.
- a Routing establishes a connection between an accepted Stream (or its actual video Source) and a Target. The Target is valid as long as the Stream is valid. More specifically, Inputs can be routed based on user defined Routing Rules.
- a Routing Rule is a definition attached to an Input, which is evaluated when a contributed Signal appears at the Input. The Routing Rules are re-evaluated periodically (e.g., each second) to determine the state of Streams. Routing Rules of an Input are re-evaluated when predicates change.
- User defined Routing Rules can be used to define a desired setup, such as:
- Streams can represent accepted Signals comprising conventional video, compressed data streams or data files as well as IB metadata sent with other components of the audiovisual content (video, audio, closed captions etc.).
- Inputs can define additional metadata attached to accepted Streams upon creation or definition as logical entities by the operators. Because metadata can change over time, Streams are re-evaluated when their metadata changes. In one embodiment, changes in states of Inputs also cause re-evaluation of routing relations.
- FIGs. 5 -12 show the process for creating Inputs in the LVC platform according to the present invention.
- the process begins in FIG. 5, when an operator who logs into the LVC platform shown in FIG. 2 selects an Input tab (not shown) via a user interface. Within the Input tab, the operator clicks on “New Input,” which presents a list of available Inputs such as Stream tag, RTMP Push, or SRT Push. In this example, the operator selects SRT Push.
- a configuration wizard that contains two tabs, namely “General” and “Setting”, is presented to the operator.
- FIG. 6 shows selection of the “General” tab, which allows the operator to enter generic information that would make the Input discoverable within the LVC platform.
- FIG. 7 shows selection of the “Setting” tab, which contains SRT Push specific settings.
- the LVC platform generates a URL for contributed Signal, which is unique to each Input.
- FIG. 8 shows a selected datacenter from a list where the URL is served from.
- FIG. 9 shows other configurations like the latency, which in this example is set at 50ms.
- FIG. 10 shows the predicates. Each Input may contain zero or more predicates.
- FIG. 11 shows creating a first predicate that accepts Streams only from a given IP address and a second predicate that requires the Streams to be accepted between 8pm and 10pm.
- when the operator clicks the ‘Create’ button, the LVC makes a request to the relevant API to create a record for the created Input in the DBMS.
Output Creation

- Outputs are the destinations of Streams. They are logical entities that may refer to one or more physical servers and many potential destinations like social websites or content delivery networks (CDNs) as explained in connection with FIG. 3. The physical destinations are grouped within one Output so that the same video processing, like standards conversion, may be applied to the Video Signal. Outputs may contain one or more Program Transformations. Program Transformations may be specified individually for each Target/Destination. Before generating a multiplex for a specific destination, the incoming PES as specified within the PMT are selected and assigned a specific Packet Identifier (PID). A user may therefore select which destination receives which specific set of Streams within one transport Stream.
- FIGs. 13 - 18 show the process for creating Outputs in the LVC platform according to the present invention.
- FIG. 13 shows a portal for creating Output via a user interface, where a user can create a new Output from a list of different Outputs, such as Transcoded or Passthrough Outputs.
- the user selects to create a Passthrough Output.
- FIG. 14 shows selection of the “General” tab, which allows the operator to enter generic information that would make the Output discoverable within the LVC platform.
- FIG. 15 shows selection of the “Setting” tab, which allows a user to set a delay, frame rate, resolution, as well as interlacing mode.
- FIG. 16 shows selection of the “Program Transformations” tab, which allows an operator/user to set Transformation parameters.
- the operator may specify program transformations for the output as a whole or individually for each destination.
- Program Transformations are individual rules applied to the PES within one PMT. They select a PES either known ahead of time or just in time. Program Transformations have a mode that is either “absolute”, “relative” or “any” and an action that is either “absolute”, “keep” or “drop”.
- Program Transformations that are absolute and any are known ahead of time as they do not pose a requirement of knowledge of a PMT. The absolute rule selects a PES by its PID.
- the any rule selects any PES.
- Relative rules have the form n-th of a kind, like “first video” or “second audio”. These rules select the n-th PES of a kind as it appears within the PMT. Relative rules therefore require the presence of a transport stream with a PMT to be evaluated. Program Transformations can only be resolved and evaluated once a Stream is available. It is impossible to know ahead of time whether two Program Transformations conflict, since the presence of all PESs is only known once a PMT is known. However, an operator may define conflicting rules on purpose to normalize different Streams.
- PID 100 to PID 100 and PID 101 to PID 100 would normalize any Stream that has a packetized elementary stream defined for either PID 100 or PID 101 and make this feed available on PID 100. Such a rule would only pose a conflict if a Source would advertise both PID 100 and PID 101. Program Transformations can therefore only be evaluated and realized once a Stream is routed to an Output and multiplexed for its destination.
- Program Transformations are evaluated in an order from most specific to least specific. Absolute transformations are evaluated before relative transformations, which are evaluated before any transformations. Within these categories the transformations are evaluated based on their action from more specific, being absolute, to less specific, being drop. Program Transformations are materialized once a PMT is known. In this process, all relative Transformations become absolute by selecting the PES for the n-th index in a relative Transformation. Once all absolute Transformations are known conflict resolution may occur.
- mappings that are effectively the same are merged, where conflicting actions choose the positive outcome (keep wins over drop); otherwise conflicting mappings are reported as an error to the user (for example, the mappings PID 100 to PID 200 and PID 101 to PID 200 are conflicting if both PID 100 and PID 101 exist in the Video Signal).
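The merge and conflict-resolution step described above might be sketched as follows (hypothetical data shapes; the "keep wins over drop" merge and the both-sources-present conflict test follow the text, but the function and its signature are assumptions):

```python
# Hypothetical sketch of conflict resolution once all Transformations are
# absolute (src PID -> dst PID): identical mappings are merged with "keep"
# winning over "drop", and two different source PIDs mapped to the same
# destination PID conflict only if both sources exist in the Signal.

def resolve(transforms, present_pids):
    """transforms: list of (src_pid, dst_pid, action); returns (mappings, errors)."""
    by_dst, mappings, errors = {}, {}, []
    for src, dst, action in transforms:
        if (src, dst) in mappings:
            # same mapping twice: the positive outcome wins (keep over drop)
            if action == "keep":
                mappings[(src, dst)] = "keep"
            continue
        prior = by_dst.get(dst)
        if prior is not None and prior != src and prior in present_pids and src in present_pids:
            errors.append(f"PID {prior} and PID {src} both map to PID {dst}")
            continue
        by_dst[dst] = src
        mappings[(src, dst)] = action
    return mappings, errors
```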
- Program Transformation parameters are user defined according to requirements of a program under production using the LVC.
- Distributed systems that abstract the underlying infrastructure may induce additional latency to the user of the system.
- professional distribution of Signals requires these Signals to be delivered at a constant bitrate so that receiving hardware decoders do not fail by, for instance, receiving too many packets within an instant, overflowing an internal buffer and dropping packets, which ultimately results in packet loss and visual artifacts.
- Distributing signals over wide area networks requires typically a form of buffering so that the packets which are received may be paced at a constant discrete time interval. Without such a buffer any party forwarding a signal would only be able to forward a signal at the pace it is receiving the signal which is subject to network jitter. The size of the packet buffer translates to a linear increase in latency.
- the LVC does not require the buffering normally needed to achieve a constant bitrate until the Output sends a signal to its final destination.
- the inter arrival time (IAT) of packets is recorded at the ingest server and preserved throughout the system up until the Output. At this point the IAT of the individual packets may be reproduced since it was stored as additional metadata with the packets of the transport stream.
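A minimal sketch of IAT recording and reproduction (the function names and callback-based pacing are assumptions for illustration, not the patent's implementation):

```python
import time

# Hypothetical sketch of IAT-preserving pacing: each packet carries the
# inter-arrival time recorded at the ingest server as metadata, and the
# Output reproduces that spacing when sending to the final destination.

def record_iats(arrival_times):
    """Turn absolute arrival timestamps into per-packet inter-arrival times."""
    return [0.0] + [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]

def paced_send(packets, iats, send, sleep=time.sleep):
    """Send packets, reproducing the recorded inter-arrival spacing."""
    for packet, iat in zip(packets, iats):
        sleep(iat)   # reproduce the recorded spacing
        send(packet)
```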
- FIGs. 17 and 18 show selection of the “Destination” tab, which allows a user to set destination parameters.
- FIG. 17 shows specifying multiple destinations, e.g., RTMP and SRT destinations. The same Stream routed to the Output is multiplexed to these destinations.
- FIG. 18 shows that when the Operator clicks the ‘Create’ button, the Live Video Cloud UI makes a request to the Live Video Cloud API to persist the Output record in the database.
- the Live Video Cloud API validates the data for correctness and persists a valid Output in the database with records that have unique identifiers for the created outputs.
- FIG. 19 shows attributes that define Streams.
- a Stream can be associated with an accepted Video Signal, referenced by an identified URL, as well as metadata.
- the metadata can be part of the Stream (i.e., IB metadata) or provided from a different Source, where other components of the system can read a Stream’s metadata without knowledge of the binary format of the video codec used, like H.264.
- Streams also contain a session identifier used to identify contributed Signals of the same publisher or owner during reconnects.
- the session identifier can be defined and/or created by an Input or can be given during validation by an Ingest Server. Pull Inputs generate the session identifier by themselves.
- User A opens the landing page for a public WebRTC push Input, a session ID is assigned to the WebRTC Stream.
- User A is contributor of WebRTC Stream, which can for example be an accepted Video Signal showing the contributor.
- the LVC platform tracks the session ID until Stream A1 is associated with an Input, which is configured by an operator to Route Stream A1 to Output 1. User A experiences a short connection loss and the stream is terminated.
- a browser client can re-establish the connection, resulting in a new stream, Stream A2. Since the session ID is stable across reconnects, the routing relation to the Output can be evaluated and Stream A2 will be routed to Output 1.
- One Stream can reference one Input and multiple Streams can reference the same Input.
- Each Stream defines at least one Stream URL, but there can be more than one URL for a Stream.
- Stream URLs have a mime-type attribute. There can be more than one Stream URL for the same mime-type.
- the identity of publisher/owner of a contributed Stream corresponding to an accepted Signal can be known. If the publisher is known, a valid and unique publisher/owner ID is referenced in the DBMS. Some contributors of Streams may be unknown.
- a Stream corresponding to an accepted Signal contains additional metadata like its audio and video codec, framerate and possibly a location.
- This metadata can change over the lifetime of the Stream and is stored with a timestamp relative to the creation of the stream in the DBMS.
- Metadata changes can be incremental. That is, when the value of metadata key x changes at a point in time t, the latest recorded values of the other metadata keys (e.g., key y, key z) are considered present at t.
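This timestamped, incremental metadata model can be sketched as follows (a hypothetical in-memory stand-in for the DBMS storage described above; class and method names are invented):

```python
# Hypothetical sketch of incremental Stream metadata: each key stores
# (timestamp, value) pairs relative to Stream creation; the value of a key
# at time t is the latest value recorded at or before t.

class StreamMetadata:
    def __init__(self):
        self.history = {}  # key -> list of (t, value), kept sorted by t

    def set(self, key, t, value):
        entries = self.history.setdefault(key, [])
        entries.append((t, value))
        entries.sort(key=lambda e: e[0])

    def get(self, key, t):
        latest = None
        for ts, value in self.history.get(key, []):
            if ts <= t:
                latest = value
            else:
                break
        return latest
```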
- FIG. 20 shows implementation of Routing Rules.
- Routing Rules define rules applied once a contributed Signal is accepted and abstracted as a Stream.
- a Routing Rule can cause multiple Routings based on corresponding predicates. Multiple Routing Rules for the same Target can be applied to a single Input.
- a Routing Rule can be considered as a function f(Stream) → Routing, which results in a Routing for a given Stream abstracted for an accepted Signal. Routing Rules bring about the desired state of the physical routings that should be manifested by enforcing the policy layer shown in FIG. 3.
- each Source can be uniquely identifiable. Multiple Routing Rules can reference the same Source. In one embodiment, Routing Rules reference Targets.
- One Routing Rule can contain many predicates.
- the predicates are functions of the form f(rule, stream) → Boolean (TRUE or FALSE).
- the Routing Rules do not produce Routings if any predicate’s specified Boolean condition (TRUE or FALSE) is not satisfied.
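A Routing Rule of the form f(Stream) → Routing, guarded by predicates f(rule, stream) → Boolean, might be sketched as follows (the data shapes and names are illustrative assumptions):

```python
# Hypothetical sketch: a Routing is produced for a Stream only when every
# predicate f(rule, stream) evaluates to TRUE; otherwise no Routing results.

def apply_rule(rule, stream):
    if all(pred(rule, stream) for pred in rule["predicates"]):
        return {"source": stream["id"], "target": rule["target"]}
    return None  # no Routing is produced

# Example rule: route only H.264 Streams to a (hypothetical) Output
rule = {
    "target": "output-1",
    "predicates": [lambda r, s: s.get("codec") == "h264"],
}
routing = apply_rule(rule, {"id": "stream-a", "codec": "h264"})
```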
- FIG. 21 shows Routing when connecting a Source of a Stream to a Target.
- the Target is an arbitrary location within or outside of the LVC platform, but not necessarily an Output.
- the Target may be a component that records accepted Streams or enriches a Stream based on metadata, for example, Routing based on AI/ML determined parameters, such as analysis of confidence levels in defined characteristics of Streams.
- Sources of Signals can be Ingest Servers
- Targets may be Output, tiler engines, switches, routers or internal nodes, like a recorder. Routings can be created automatically when Streams appear. Routings can also be created by components in the LVC platform. In one embodiment, Outputs are immutable, i.e., they do not get updated after creation. Routings can be edited or deleted by a user after creation.
- Routings reference Sources with unique identifiers of Streams or internal or external URLs. For example, Source Stream UUID //95ac51c7-9afl-497b-b542-07be6458b071 or Source URL rtmp://ingest.live/_/ABC123. Routings also reference a determinable Target with a unique identifier or a URL.
- Target UUID //95ac51c7-9afl-497b-b542- 07be6458b071 or Target videoengine://23ac51c7-9al l-497b-b542-07be6458b071, which can for example be a tiler Target for Source Stream UUID //95ac51c7-9afl-497b-b542- 07be6458b071.
- Routings are resolved by resolving Sources and Targets.
- a Source reference can for example be resolvable to a URL or an Ingest Server.
- Streams UUIDs can be ultimately resolvable to the URL or the Ingest server.
- a URL is already considered a resolved Source.
- a Target reference is also resolvable to a URL.
- UUIDs are ultimately resolvable to:
- Routing Targets can reference a Production, which can change during the lifetime of Routings.
- Example: Output foo can reference Production foo as long as it is attached to that Production.
- Streams can be routed to Targets once they appear.
- An example for such Routing is a Stream that is routed to a tiler (Selector) application when it appears first.
- An operator may define automation for routings based on a criterion. The DBMS will evaluate whether or not a Stream matches the criteria and creates a Routing to a Target, which may be dynamically chosen.
- Stream B is defined as a backup for Stream A. Once Stream A ceases to exist, for example due to a connection loss, Stream B is automatically routed to the Target of Stream A.
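The backup behavior above can be sketched as a re-evaluation step (hypothetical data shapes; the function name and the backups map are assumptions):

```python
# Hypothetical sketch: when a primary Stream disappears, re-evaluation
# swaps its backup Stream into any Routing that referenced the primary.

def reevaluate(active_streams, routings, backups):
    """backups maps a primary stream id to its backup stream id."""
    updated = []
    for routing in routings:
        src = routing["source"]
        if src not in active_streams and backups.get(src) in active_streams:
            # primary is gone but its backup is active: reroute to the backup
            routing = {**routing, "source": backups[src]}
        updated.append(routing)
    return updated
```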
- In another example, an operator may define a rule to route the Stream with the highest confidence score for metadata derived from applying AI/ML, such as a keyword like “smile”. Applying AI/ML facial recognition, metadata from a stream that contains a confidence score for features of a face (e.g., smiling) is used for Routing based on whether the confidence passes a threshold.
- the operators can also Route Streams to Targets by performing actions via the graphical user interface (or using an LVC API).
- the operator selects a Stream and routes it to an Output using a presented user interface.

Routing Protocol
- Targets implement Routing protocols.
- a Target may also be a Source.
- the routing protocol defined using gRPC is declared as a service, for example: service RoutingTarget { … }
- a Routing Rule is entered from Input to Output for the Stream's unique session identifier.
- the existing Routing Rule for example, from a different Input, is removed.
- Routing Rules can involve testing for all predicates, building a set of Targets for a given Routing Rule and mapping Targets to associated Routings. Routing Rules are evaluated when any Routing Rule, Target of a Routing Rule or an Input is created, updated or deleted, regardless of the state of the Input (online or offline, for instance). Re-evaluation can occur when:
- routing rules are removed from an Input.
- the DBMS will therefore perform a reevaluation of the current Routings and evaluation of new Routings by performing the following:
- the following re-evaluation algorithm can be used to map from the current state or Routings to the desired state of Routings:
- Input A has a rule to Route accepted Streams to Production A
- Production A creates Routings for incoming connections to Tiler A
- implicit Routings for any incoming Stream to the Production are created based on criteria (1). These Routings will receive the UUID of the Production as their context. When the Stream is routed to the Production, a service receiving the routing will create a new routing for the Stream to Tiler A based on criteria (2). No context information is attached, as this Routing is not created by an Input.
- re-evaluation occurs based on criteria (3).
- corresponding Routings from Production to tiler are removed when they are removed from the Production.
- No Routing to the tiler is removed if the corresponding Routing stays active for the Production, because the Routings created by the Production to the tiler are ignored.
- the Routings that are not directly created by the Input are ignored during re-evaluation. Instead, it is assumed that the service handling the Routings for the Production is also disconnecting corresponding Streams from the tiler when the service determines that a Stream is disconnected.
- connect is an idempotent operation, i.e., an operation that can be executed an unlimited number of times and the state will be the same, and therefore would not create the same Routing twice.
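Idempotent connect can be sketched with a set-backed routing table (a simplification for illustration; per the text, a real Target would implement this as a gRPC service):

```python
# Hypothetical sketch: connect is idempotent, so executing it any number
# of times for the same (source, target) pair yields exactly one Routing.

class RoutingTable:
    def __init__(self):
        self.routings = set()

    def connect(self, source, target):
        self.routings.add((source, target))  # adding twice is a no-op
        return (source, target)

table = RoutingTable()
table.connect("stream-a", "tiler-a")
table.connect("stream-a", "tiler-a")  # same state as after one call
```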
- Routing Rule of Target can reference a Production indirectly. This reference can change during the lifetime of the routing rule.
- the Output foo references the Production foo as long as it is attached to that Production, and the routing rule’s production reference becomes associated with Production foo.
- the LVC platform supports the following non-exhaustive Input type examples:
- RTMP Push Input defines an Endpoint known ahead of time for a single contributed Stream under a 1: 1 relationship.
- the Endpoint has a form like rtmp://ingest.live/i/XYZ123.
- RTMP Push Input can be an external Device owned by either a user or a team.
- RTMP Push Inputs do not allow Streams to replace the current Video Signal if a Stream is currently considered active for the same session ID.
- RTMP Push Inputs can support predicates for accepting a Signal based on an IP range.
- Public WebRTC Push can define both a WebRTC Endpoint as well as a landing page which contributors can use to start streaming, for example, under a 1: N relationship as the associated Input could result in many Streams.
- Public WebRTC Inputs can be owned by a team of contributors, but not a single contributor.
- Public WebRTC Inputs support predicates for accepting a Signal, for example, based on the current location of the contributors.
- Private and public WebRTC push can potentially be the same Input type.
- Private WebRTC push could define an additional predicate, for example, a contributor must be present and a member of a certain team.
- HLS Pull Input is owned by a team, but not a single user.
- Streams can be recorded. Whether a Stream is recorded may change during its lifetime. Streams that have not been recorded are lost and not recoverable.
- a Stream’s URL can be selected based on a mime-type.
- One implementation of the LVC platform builds a subset of Stream URLs for a given mime-type. If the resulting set contains more than one entry, the result is chosen using round-robin load balancing.
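The mime-type selection with round-robin balancing might be sketched as follows (a hypothetical class; the behavior mirrors the text, the names are invented):

```python
# Hypothetical sketch: filter Stream URLs by mime-type, then hand out
# matching entries with a per-mime-type round-robin counter.

class UrlSelector:
    def __init__(self, urls):
        self.urls = urls       # list of (mime_type, url)
        self.counters = {}     # mime_type -> next round-robin index

    def select(self, mime_type):
        matches = [u for m, u in self.urls if m == mime_type]
        if not matches:
            return None
        i = self.counters.get(mime_type, 0)
        self.counters[mime_type] = i + 1
        return matches[i % len(matches)]

sel = UrlSelector([("video/mp4", "u1"), ("video/mp4", "u2"), ("application/x-mpegURL", "h1")])
```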
- FIG. 22 shows an example of a flow diagram amongst entities of the LVC platform.
- an external actor named Device inside a cloud connects to an Ingest server (1), which validates the connection via its Input (2).
- routing relations are evaluated (4), three Routings are applied to the accepted Stream (5), namely, Routing to an Output, a Production or a Recorder.
- FIG. 23 shows a flow chart that exemplifies implementing an embodiment of the invention in a method implemented by logic in the LVC platform.
- the LVC platform provides login interfaces for operators of publishers/owners of programs, who are identified by universally unique publisher IDs. The operators create and manage individuals or groups/teams who contribute public or private Streams. The operators create Inputs that identify Streams and associate predicate logic for accepting or rejecting the contributed Signals.
- An Admission Control process accepts contributed Signals based on Input logic that identifies such Signals based on user/operator defined predicates. A decision logic checks an admission control predicate to determine whether it is TRUE or FALSE based on the user defined predicates. Based on the determination, contributed Signals are rejected or accepted.
- One logic may accept Signals if the associated predicate is TRUE and reject Signals if the associated predicate is FALSE.
- Opposite logic may accept Signals if the associated predicate is FALSE and reject Signals if the associated predicate is TRUE.
- the logical choices are made by the operators who access the LVC platform. Such operators also specify Routings for Inputs based on Routing Rules applied to accepted Streams. In one example, the operators specify predicates for Routing from Sources to Destinations. All operator specified or otherwise created Inputs, Routings and Input and Routing predicates, as well as Program Transformation parameters, are stored in the DBMS shown in FIG. 2. The predicates are checked by retrieving the predicates and applying them to the admission control and routing logical processes for routing accepted Streams based on Routing Rules to Outputs based on Program Transformations.
- implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof.
- These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one Input device, and at least one output device.
- the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide Input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and Input from the user can be received in any form, including acoustic, speech, or tactile Input.
- the systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components.
- the components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN”), a wide area network (“WAN”), and the Internet.
- the computing system can include scalable servers, for example implemented in the cloud, comprising a plurality of scalable processors that execute the logical entities, e.g., the Inputs, Streams, Routings and Outputs described above, in order to meet a wide array of program production needs. For example, as the number of contributed Signals increases, more processors can be deployed to implement the described logical entities.
- a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2021/030421 WO2023022697A1 (en) | 2021-08-16 | 2021-08-16 | Scalable system and method using logical entities for production of programs that use multi-media signals |
CA3223643A CA3223643A1 (en) | 2021-08-16 | 2021-08-16 | Scalable system and method using logical entities for production of programs that use multi-media signals |
EP21953611.7A EP4359966A1 (en) | 2021-08-16 | 2021-08-16 | Scalable system and method using logical entities for production of programs that use multi-media signals |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023022697A1 true WO2023022697A1 (en) | 2023-02-23 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100020794A1 (en) * | 2007-05-29 | 2010-01-28 | Chris Cholas | Methods and apparatus for using tuners efficiently for delivering one or more programs |
US20190222619A1 (en) * | 2015-05-14 | 2019-07-18 | Web Spark Ltd. | System and Method for Streaming Content from Multiple Servers |
2021
- 2021-08-16 WO PCT/US2021/030421 patent/WO2023022697A1/en active Application Filing
- 2021-08-16 EP EP21953611.7A patent/EP4359966A1/en active Pending
- 2021-08-16 CA CA3223643A patent/CA3223643A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CA3223643A1 (en) | 2023-02-23 |
EP4359966A1 (en) | 2024-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11962835B2 (en) | Synchronizing internet (over the top) video streams for simultaneous feedback | |
US10609447B2 (en) | Method of unscrambling television content on a bandwidth | |
US9503765B2 (en) | Averting ad skipping in adaptive bit rate systems | |
US9240212B2 (en) | Synchronized viewing of media content | |
JP2017517167A (en) | Targeted ad insertion for streaming media data | |
RU2656093C2 (en) | Content supply device, content supply method, program, terminal device and content supply system | |
US11622136B2 (en) | System and method for providing a customized manifest representing a video channel | |
US11838204B2 (en) | Scalable system and method that use logical entities for production of programs that use multi-media signals | |
EP3891999B1 (en) | Just after broadcast media content | |
Li et al. | Cloud-based video streaming services: A survey | |
US11876851B2 (en) | Synchronizing independent media and data streams using media stream synchronization points | |
WO2023022697A1 (en) | Scalable system and method using logical entities for production of programs that use multi-media signals | |
US10715560B1 (en) | Custom traffic tagging on the control plane backend | |
US20230008021A1 (en) | Synchronizing independent media and data streams using media stream synchronization points | |
Park et al. | A Study on Video Stream Synchronization from Multi-Source to Multi-Screen | |
Salehi et al. | Applications of Multimedia Clouds | |
Pérez et al. | Multi-vendor video headend convergence solution | |
WO2023014783A1 (en) | Synchronizing independent media and data streams using media stream synchronization points | |
Stellpflug et al. | Remote Content Access and the Rise of the Second Screen | |
Hersly | Smarter Workflows for Multi-platform Delivery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21953611; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 3223643; Country of ref document: CA |
| WWE | Wipo information: entry into national phase | Ref document number: 2021953611; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2021953611; Country of ref document: EP; Effective date: 20240124 |
| NENP | Non-entry into the national phase | Ref country code: DE |