US20140143820A1 - Internet-Based Video Delivery System - Google Patents

Internet-Based Video Delivery System

Info

Publication number
US20140143820A1
US20140143820A1 (Application US13/796,053)
Authority
US
United States
Prior art keywords
packets
overlay network
overlay
original
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/796,053
Inventor
James W. Akimchuk, III
Leigh Willis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
VIDEOLINK LLC
Original Assignee
VIDEOLINK LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by VIDEOLINK LLC filed Critical VIDEOLINK LLC
Priority to US13/796,053 priority Critical patent/US20140143820A1/en
Assigned to VIDEOLINK LLC reassignment VIDEOLINK LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AKIMCHUK, JAMES W., III, WILLIS, LEIGH
Priority to PCT/US2013/044471 priority patent/WO2014077899A1/en
Publication of US20140143820A1 publication Critical patent/US20140143820A1/en
Priority to US14/831,084 priority patent/US9661373B2/en
Abandoned legal-status Critical Current

Classifications

    • H04N 21/42204 User interfaces specially adapted for controlling a client device through a remote control device; Remote control devices therefor
    • H04N 21/2187 Live feed
    • H04L 45/64 Routing or path finding of packets in data switching networks using an overlay routing layer
    • H04L 65/765 Media network packet handling intermediate
    • H04N 21/2221 Secondary servers being a cable television head-end
    • H04N 21/4622 Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
    • H04N 21/6125 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network, involving transmission via Internet
    • H04N 21/6156 Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
    • H04N 21/64322 Communication protocols: IP
    • H04N 21/6473 Monitoring network processes errors
    • H04N 21/64776 Control signals issued by the network directed to the server, for requesting retransmission, e.g. of data packets lost or corrupted during transmission
    • H04N 23/661 Transmitting camera control signals through networks, e.g. control via the Internet

Definitions

  • remote control studios have been created.
  • a camera there may be a camera, a zoom lens system, pan/tilt capability, an audio package, studio lighting package and, in some cases, an interruptible fold back system to allow the experts to hear questions from an interviewer.
  • a TV monitor, a VCR or DVD player may also be present.
  • a backdrop system can be added using a large television or video monitor. Different images may be displayed on the screen to provide different backdrops, including daytime and nighttime city skylines and company logos. These backdrops help give the remote studio a more professional look on air and are an advancement over the more conventional backgrounds previously used.
  • the video feed travels through a TV1, 270 Mb or 1.5 Gb fiber optic circuit to the long distance video carrier POP.
  • the signal travels via fiber optic cable to the technical operations center, although satellite transmission is also possible.
  • the communication infrastructure required to transmit the video feed from the remote studio to the control location may be expensive.
  • the fiber-based long distance transmission model involves a high installation cost, high monthly recurring cost and modest per-usage cost.
  • control of the camera and studio is typically at a location different from that receiving the live video feed.
  • This control location may have dedicated equipment in order to control the camera, which may be very specialized.
  • such equipment may only be able to control one camera at a time. Therefore, to control two cameras simultaneously, it may be necessary to have two complete sets of control equipment.
  • the problems of the prior art are addressed by the present system and method for delivering content over the Internet from a content source to a destination.
  • the system includes the use of an overlay network, built on the underlying IP network. Additionally, error correction capability is added to the overlay network to allow the destination to reconstitute packets lost during the transmission over the Internet.
  • an overlay network is created using a plurality of overlay nodes, which may be geographically distributed, either throughout the country or throughout the world. Each respective content source or destination is also a part of the overlay network.
  • Overhead, capable of providing error correction capability, is added to the content flow as it enters the overlay network. As the content flow leaves the overlay network, this overhead is removed. In the case of transmission errors across the overlay network, the overhead information is used to reconstitute lost packets.
  • FIG. 1 represents an overlay network in accordance with one embodiment
  • FIG. 2 shows a schematic view of the functions performed by the transmitting appliance in the present invention
  • FIG. 3 shows an exemplary representation of the hardware used to construct the transmitting appliance of FIG. 2 ;
  • FIG. 4 shows a schematic view of the functions performed by the receiving appliance in the present invention.
  • FIG. 5 shows a sample transmission across the overlay network.
  • FIG. 1 shows a representative overlay network 10 in accordance with one embodiment.
  • the overlay network includes a number of nodes 100 a - e , which may be geographically dispersed across the transmission region of interest. While FIG. 1 shows the overlay nodes located in the United States, the invention is not limited to this embodiment. For example, one or more nodes may be located in other countries. Additionally, there is no upper limit to the number of overlay nodes that may be included in the overlay network.
  • the overlay network serves to create reliable or semi-reliable transmissions across the underlying IP network.
  • This overlay network may be a message-oriented overlay network (MOON), although other types of overlay networks may also be used.
  • the overlay network may be based on the implementation known as SPINES, available from www.spines.org.
  • the MOON may be based on Resilient Overlay Network (RON), available from the Massachusetts Institute of Technology.
  • each overlay node is a general purpose computer, executing the SPINES software and having memory elements and one or more network interfaces.
  • the SPINES software is resident in the memory elements, and is executed by a processing unit in the general purpose computer.
  • the processing unit may be any suitable processor, multi-core processor, or may be a plurality of processors.
  • the purpose of the overlay network is to reduce the time needed to re-transmit dropped packets.
  • the sequence numbers are only monitored by the source and destination of a transmission on the Internet. Thus, if a destination in New York determines that a packet was dropped when receiving a content flow from a source in California, the node in New York must request retransmission of this packet. This entails sending the request through multiple nodes until it reaches the original source. By using an overlay network, this delay can be reduced.
  • each overlay node tracks sequence numbers and dropped packets. Thus, upon detection of a dropped packet, an overlay node can request retransmission from the overlay node which it received the packet from. Since these overlay nodes are closer together, the time to discover the error and request and receive a retransmission is much reduced.
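The hop-by-hop recovery described above can be sketched as follows. This is a minimal, hypothetical illustration, not the SPINES implementation: the class and method names (`OverlayNode`, `request_retransmit`) are invented for clarity. The key point it shows is that a node NACKs its previous hop rather than the original source, so the recovery round trip spans one overlay link instead of the whole path.

```python
# Hypothetical sketch of per-hop loss detection: each overlay node tracks
# sequence numbers and asks its previous hop (not the original source) to
# retransmit, shortening the recovery round trip.

class OverlayNode:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream      # previous hop in the overlay route
        self.expected_seq = 0
        self.buffer = {}              # out-of-order packets held here

    def receive(self, seq, payload):
        """Buffer the packet; NACK any gap to the previous hop."""
        self.buffer[seq] = payload
        missing = [s for s in range(self.expected_seq, seq)
                   if s not in self.buffer]
        for s in missing:
            self.request_retransmit(s)
        # advance past every contiguously received packet
        while self.expected_seq in self.buffer:
            self.expected_seq += 1

    def request_retransmit(self, seq):
        if self.upstream is not None:
            print(f"{self.name}: NACK seq {seq} -> {self.upstream.name}")

src = OverlayNode("source")
node_a = OverlayNode("100a", upstream=src)
node_a.receive(0, b"p0")
node_a.receive(2, b"p2")   # seq 1 missing: NACK goes to the previous hop only
```

Because the buffer holds out-of-order packets, a late arrival of sequence 1 would simply fill the gap, matching the patent's note that unnecessary retransmissions are minimized.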
  • FIG. 2 shows a representative transmitting appliance 200 which may be used in accordance with one embodiment.
  • the appliance 200 may be in communication with a video encoder 250 via an encoder interface 210 .
  • the encoder 250 encodes baseband video into an MPEG format, such as but not limited to MPEG-TS. This content flow is then divided into packets, which are sent to the transmitting appliance 200 .
  • the MPEG-TS packets are transmitted via UDP. In other embodiments, a different network protocol is used to transmit the packets.
  • the encoder 250 and the transmitting appliance 200 are shown as separate components, in some embodiments, these two elements may exist within one physical component.
  • the transmitting appliance 200 receives packets from the encoder 250 via the encoder interface 210 .
  • a predetermined number of redundant packets are created.
  • This coding block 220 receives a group of packets, where the size of the group is predetermined, but programmable. It then uses this group of packets to create a set of redundant packets, where the number of redundant packets is selectable by the user or by the transmitting appliance 200 .
  • the coding block 220 may accept ten packets, and create five redundant packets based on the information in these ten packets. The purpose of this redundancy is to create a mechanism by which the original ten packets can be reconstituted, even if those original ten packets do not reach the destination.
  • the coding block uses Forward Erasure Correction (FEC) algorithms to create the redundant packets.
  • One such embodiment may be found at www.openFEC.org, although other embodiments may also be used.
  • the LDPC Staircase codec may be utilized.
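The patent's coding block uses a real erasure codec such as LDPC-Staircase (e.g. via OpenFEC). As a minimal stand-in under that assumption, the sketch below derives a single redundant packet per group using XOR parity; a production codec would generate a selectable number of redundant packets per group, as the text describes.

```python
# Minimal erasure-coding sketch (single XOR parity), standing in for the
# LDPC-Staircase codec named in the text. One parity packet per group can
# reconstitute any one lost original packet. Packets are assumed to be of
# equal length, as is typical for fixed-size MPEG-TS/UDP payloads.

def make_redundant(group):
    """Return one XOR-parity packet for a group of equal-length packets."""
    parity = bytearray(len(group[0]))
    for pkt in group:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

group = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_redundant(group)
```

With an ideal (MDS) code, a group of ten packets plus five redundant packets, as in the example above, survives the loss of any five packets of the fifteen; the single-parity sketch here survives only one.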
  • the resulting set of packets (where a set is defined as the original group plus the redundant packets) is then moved to the packet formation block 230 for further processing.
  • the packet formation block 230 generates a header for each packet in the set.
  • the header may contain various information. For example, in one embodiment, the header includes packet sequencing information, transmission identification information, and the FEC ratio (i.e. the ratio of the redundant packets to the group size). The header is appended to each packet.
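The patent names the header fields (packet sequencing information, transmission identification, FEC ratio) but not a wire format, so the fixed layout below is an assumption for illustration; the FEC ratio is carried here as the pair (group size, redundant count).

```python
import struct

# Hypothetical header layout for the packet formation block: the field set
# comes from the text, the byte layout is an assumption.
HEADER_FMT = "!IIBB"   # seq (u32), transmission id (u32), group size (u8), redundant count (u8)

def add_header(seq, tx_id, group_size, redundant, payload):
    """Prepend the header to one packet; FEC ratio = redundant/group_size."""
    return struct.pack(HEADER_FMT, seq, tx_id, group_size, redundant) + payload

def strip_header(packet):
    """Split a packet back into its header fields and payload."""
    hdr_len = struct.calcsize(HEADER_FMT)
    fields = struct.unpack(HEADER_FMT, packet[:hdr_len])
    return fields, packet[hdr_len:]

pkt = add_header(7, 0xBEEF, 10, 5, b"mpeg-ts bytes")
fields, payload = strip_header(pkt)
```

The receiving side's header deconstruction block can recover the transmission ID, FEC ratio, and sequence number from every packet with `strip_header`, which is what lets it detect stream restarts and FEC-ratio changes.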
  • the packet formation block 230 contacts the transmission block 240 using a standard API.
  • the transmission block 240 executes the SPINES software and uses the API that it provides. The transmission block 240 then forwards the packets across the overlay network.
  • the transmitting appliance 200 is an additional node in the overlay network shown in FIG. 1 .
  • any transmitting appliance 200 (of which there may be many, depending on the number of remote content generation sites) is an end node for the overlay network of FIG. 1 .
  • FIG. 2 shows the transmitting appliance 200 subdivided into four blocks 210 - 240 . This division is for illustration purposes only, and the invention is not required to partition the functions performed by the transmitting appliance 200 in this way.
  • the transmitting appliance 200 may be a general purpose computing element, having a processing unit 201 , associated memory elements 202 and one or more Network Interface Cards (NICs) 203 .
  • the associated memory 202 may be in the form of semiconductor memory, magnetic memory (such as disk drives) or optical memory (such as CD-ROMs).
  • the semiconductor memory may be volatile, such as RAM, non-volatile, such as ROM, or a re-writable non-volatile memory, such as FLASH EPROM.
  • the instructions needed to perform the functions described above are stored in the memory elements associated with the processing unit 201 .
  • the processing unit 201 may be a single processor, a multi-core processor, or may be a plurality of processors.
  • the transmitting appliance 200 may also include other hardware, such as dedicated hardware to perform the coding (such as FEC) algorithm, or DMA (direct memory access) machines to facilitate movement of data through the appliance 200 .
  • the coding algorithm can also be performed using a software implementation running on the processing unit 201 .
  • data is received from the encoder 250 via NIC 203 , processed in the memory 202 by the processing unit 201 , and passed out to the overlay network via the NIC 203 .
  • the transmitting appliance 200 may be constructed in a variety of different ways.
  • the transmitting appliance 200 may be a special purpose device, constructed specifically for this task. It may also be a traditional PC (personal computer), running the software needed to perform these functions.
  • the appliance may also be integrated into third party encoding or decoding equipment to allow for compatibility with the overlay network.
  • the transmission block 240 transmits the packets to the overlay network shown in FIG. 1 .
  • a receiving appliance is disposed at the destination.
  • the receiving appliance is a node which is part of the overlay network shown in FIG. 1 .
  • a representative functional diagram of a receiving appliance 300 is shown in FIG. 4 .
  • the data from the overlay network enters the receiving appliance 300 and is received by the input block 310 .
  • This input block 310 may execute the same software found in the transmission block 240 of the transmitting appliance 200 .
  • this software is the SPINES or RON software described above.
  • the input block 310 passes the received packets to the header deconstruction block 320 , which serves to read the header of every packet to determine the transmission ID, the FEC ratio and current sequence number of the incoming stream.
  • the header deconstruction block 320 uses this to detect changes in the stream in the event of stop/start of the transmitter, and is therefore also able to handle changes in FEC ratio.
  • the header deconstruction block 320 waits to receive a set of packets (defined as a group of original packets and the redundant packets), or until a sufficient time has elapsed such that any missing packets are assumed to be lost.
  • the set of packets is then passed to the coding block 330 , which accepts the set of packets (or the received packets from a set).
  • the coding block 330 may then reconstruct any missing or corrupted original packets, using the information available in the set of packets. If all original packets were properly received, the coding block 330 can simply discard the redundant packets. However, in all other cases, the redundant packets are used to reconstruct the missing original packets.
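The receive-side reconstruction can be sketched to match the single-XOR-parity scheme assumed earlier (a stand-in for the LDPC-Staircase codec named in the text, which tolerates several losses per set). If every original packet arrived, the parity is simply discarded, as the text states; otherwise the parity is XOR-ed with the surviving originals to reconstitute the one missing packet.

```python
# Receive-side counterpart to the XOR-parity sketch: reconstitute a single
# missing original packet from the surviving packets plus the parity packet.

def reconstruct(received, parity, group_size):
    """received: {index: packet} for surviving originals.
    Returns the full original group, in order."""
    missing = [i for i in range(group_size) if i not in received]
    if not missing:
        # all originals arrived: the redundant packet is simply discarded
        return [received[i] for i in range(group_size)]
    if len(missing) > 1:
        raise ValueError("single-parity sketch recovers at most one loss")
    lost = bytearray(parity)
    for pkt in received.values():
        for i, b in enumerate(pkt):
            lost[i] ^= b
    received[missing[0]] = bytes(lost)
    return [received[i] for i in range(group_size)]

# packet 1 of a three-packet group was lost in transit; b"@@@@" is the
# XOR parity of b"AAAA", b"BBBB", b"CCCC"
group = reconstruct({0: b"AAAA", 2: b"CCCC"}, b"@@@@", 3)
```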
  • the packets, which define the original group transmitted by video encoder 250 to the transmitting appliance 200 are then forwarded to the output block 340 .
  • the output block 340 forwards the group of packets to a video decoder 350 . This transmission may be using UDP or some other network protocol.
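The hand-off from the output block to the decoder, which the text says "may be using UDP or some other network protocol", can be sketched as below. The address and port are placeholders, not from the patent.

```python
import socket

# Minimal sketch of the output block's UDP hand-off: each recovered original
# packet is sent to the video decoder as one datagram. The endpoint is a
# hypothetical placeholder.
DECODER_ADDR = ("127.0.0.1", 5004)

def forward_group(packets, addr=DECODER_ADDR):
    """Send each recovered original packet to the decoder as one datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for pkt in packets:
            sock.sendto(pkt, addr)
    finally:
        sock.close()
```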
  • the video decoder 350 may be part of the same physical component as the receiving appliance 300 , or may be a separate device. Furthermore, the underlying architecture of the receiving appliance 300 may be similar to that of the transmitting appliance 200 , as shown in FIG. 3 .
  • a transmitting appliance is located at the content provider 400 in Oregon. This transmitting appliance receives a video stream from a video encoder, as described above.
  • the transmitting appliance is a node on the overlay network and makes the video feed available using a protocol such as multicast.
  • a user determines that they desire to view the video stream being sourced at the content provider 400 in Oregon.
  • the receiving appliance located at the destination 401 , requests that multicast group from its localhost daemon. That daemon then determines the best route to the source, using the overlay nodes. In this example, the daemon selects a route that utilizes overlay nodes 100 e, 100 b and 100 a.
  • the transmitting appliance transmits packets over the overlay network, after performing all of the steps and functions described above.
  • the individual packets each have headers, and the redundant packets have been created and are transmitted with the original group of packets.
  • This stream reaches overlay node 100 a, which checks to make sure that all packets arrive. This may be done by checking sequence numbers or some other similar mechanism. Any packets that do not arrive are requested again by overlay node 100 a.
  • the overlay node may have sufficient buffering so that it can receive packets out of order and keep track of which packets are missing. In this way, unnecessary retransmissions are minimized.
  • the overlay node 100 a does not provide any of the coding functions described above.
  • the overlay node 100 a executes the software needed to create the overlay network and forwards the packets to the next overlay node 100 b.
  • the overlay node 100 b performs the same checks to ensure that all of the packets have been received and forwards the packets (including the original group of packets and the redundant packets) to overlay node 100 e.
  • Overlay node 100 e performs the same functions as the other overlay nodes 100 a, 100 b and forwards the packets to the destination 401 .
  • the receiving appliance located at the destination 401 is also part of the overlay network, so it is able to request retransmissions if it determines that one or more packets were lost.
  • the receiving appliance then performs the functions described above, where it strips off the headers, reorders the packets, and reconstructs any lost original packets.
  • the original group of packets is then delivered, such as by using UDP, to the video decoder.
  • This configuration provides the reliability improvement inherent in an overlay network. This improvement is further augmented by using FEC (or some other forward erasure or error coding) to further protect against dropped or lost packets, thus offering a robust, low-cost mechanism to transmit time-sensitive information, such as a video stream, across the Internet.
  • this Internet-based video delivery system can be combined with the remote controlled studio camera system, as disclosed in copending U.S. Patent Publication 2012/0212609.
  • the first subsystem described in that application may include the transmitting appliance described herein.
  • the other subsystems that receive a video stream from the first subsystem, such as but not limited to the fourth subsystem, may include a receiving appliance as described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A system and method for delivering content over the Internet from a content source to a destination is disclosed which includes the use of an overlay network, built on an underlying IP network. Additionally, error correction capability is added to the overlay network to allow the destination to reconstitute packets lost during the transmission over the Internet. In one embodiment, an overlay network is created using a plurality of overlay nodes, which may be geographically distributed. Each respective content source or destination is also a part of the overlay network. Overhead, capable of providing error correction capability, is added to the content flow as it enters the overlay network. As the content flow leaves the overlay network, this overhead is removed from the content flow. In the case of transmission errors across the overlay network, the overhead information is used to reconstitute lost or corrupted packets.

Description

  • This application claims priority of U.S. Provisional Patent Application Ser. No. 61/728,012, filed Nov. 19, 2012, the disclosure of which is incorporated herein by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • The flood of news, financial, political and other informational television programs has generated an ever increasing demand to utilize on-air experts, such as investment bankers, lawyers, and politicians. The presence of these experts adds credibility and an in-depth analysis of a given topic that is not otherwise possible.
  • Traditionally, for this interview to occur, the expert is forced to travel to the television studio of the television show that is interested in interviewing this expert. This involves costs for the television program, and an inconvenience for the expert. For example, the expert would have to travel to the studio, where they are prepared for the interview through hair and makeup, and appear on camera. They then travel back to their office. Often, experts appear without monetary compensation, as the publicity associated with being on-air is considered compensation. For many corporations, the publicity is not worth the lost time and expense associated with visiting a studio. In addition, such an arrangement does not allow for real-time analysis of time-sensitive events, such as breaking news, corporate mergers, or political reaction, as the experts need time and sufficient notice to travel to the studio.
  • To solve this problem, remote control studios have been created. In such a studio, there may be a camera, a zoom lens system, pan/tilt capability, an audio package, studio lighting package and, in some cases, an interruptible fold back system to allow the experts to hear questions from an interviewer. In some cases, a TV monitor, a VCR or DVD player may also be present. As a further enhancement, a backdrop system can be added using a large television or video monitor. Different images may be displayed on the screen to provide different backdrops, including daytime and nighttime city skylines and company logos. These backdrops help give the remote studio a more professional look on air and are an advancement over the more conventional backgrounds previously used.
  • Traditionally, in the case of a remote control studio, the video feed travels through a TV1, 270 Mb or 1.5 Gb fiber optic circuit to the long distance video carrier POP. Typically, the signal travels via fiber optic cable to the technical operations center, although satellite transmission is also possible. The communication infrastructure required to transmit the video feed from the remote studio to the control location may be expensive. The fiber-based long distance transmission model involves a high installation cost, high monthly recurring cost and modest per-usage cost.
  • In addition, the control of the camera and studio is typically at a location different from that receiving the live video feed. This control location may have dedicated equipment in order to control the camera, which may be very specialized. In addition, such equipment may only be able to control one camera at a time. Therefore, to control two cameras simultaneously, it may be necessary to have two complete sets of control equipment.
  • The issues associated with remote camera control are addressed in copending U.S. Patent Application Publication No. 2012/0212609, which is incorporated by reference in its entirety. However, it would also be advantageous if less expensive means were available to deliver the video stream from the remote studio to the distribution site in a reliable way.
  • SUMMARY OF THE INVENTION
  • The problems of the prior art are addressed by the present system and method for delivering content over the Internet from a content source to a destination. The system includes the use of an overlay network, built on the underlying IP network. Additionally, error correction capability is added to the overlay network to allow the destination to reconstitute packets lost during the transmission over the Internet. In one embodiment, an overlay network is created using a plurality of overlay nodes, which may be geographically distributed, either throughout the country or throughout the world. Each respective content source or destination is also a part of the overlay network. Overhead, capable of providing error correction capability, is added to the content flow as it enters the overlay network. As the content flow leaves the overlay network, this overhead is removed from the content flow. In the case of transmission errors across the overlay network, the overhead information is used to reconstitute lost packets.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 represents an overlay network in accordance with one embodiment;
  • FIG. 2 shows a schematic view of the functions performed by the transmitting appliance in the present invention;
  • FIG. 3 shows an exemplary representation of the hardware used to construct the transmitting appliance of FIG. 2;
  • FIG. 4 shows a schematic view of the functions performed by the receiving appliance in the present invention; and
  • FIG. 5 shows a sample transmission across the overlay network.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 shows a representative overlay network 10 in accordance with one embodiment. The overlay network includes a number of nodes 100 a-e, which may be geographically dispersed across the transmission region of interest. While FIG. 1 shows the overlay nodes located in the United States, the invention is not limited to this embodiment. For example, one or more nodes may be located in other countries. Additionally, there is no upper limit to the number of overlay nodes that may be included in the overlay network.
  • The overlay network serves to create reliable or semi-reliable transmissions across the underlying IP network. This overlay network may be a message-oriented overlay network (MOON), although other types of overlay networks may also be used. In one embodiment, the overlay network may be based on the implementation known as SPINES, available from www.spines.org. In other embodiments, the MOON may be based on Resilient Overlay Network (RON), available from the Massachusetts Institute of Technology. Of course, other overlay network architectures may be used as well. In one embodiment, each overlay node is a general purpose computer, executing the SPINES software and having memory elements and one or more network interfaces. The SPINES software is resident in the memory elements, and is executed by a processing unit in the general purpose computer. The processing unit may be any suitable processor, multi-core processor, or may be a plurality of processors.
  • The purpose of the overlay network is to reduce the time needed to re-transmit dropped packets. Traditionally, sequence numbers are monitored only by the source and destination of a transmission on the Internet. Thus, if a destination in New York determines that a packet was dropped when receiving a content flow from a source in California, the node in New York must request retransmission of this packet. This entails sending the request through multiple nodes until it reaches the original source. By using an overlay network, this delay can be reduced. In an overlay network, each overlay node tracks sequence numbers and dropped packets. Thus, upon detection of a dropped packet, an overlay node can request retransmission from the overlay node from which it received the packet. Since these overlay nodes are closer together, the time to discover the error and to request and receive a retransmission is greatly reduced.
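The hop-by-hop recovery described above can be sketched as follows. This is a hypothetical illustration, not the SPINES implementation: a node tracks the sequence numbers arriving from its upstream neighbor, buffers out-of-order packets, and directs any retransmission request to that neighbor rather than to the original source.

```python
# Sketch of per-hop gap detection in an overlay node (illustrative only).
class OverlayNode:
    def __init__(self, name):
        self.name = name
        self.expected_seq = 0          # next sequence number we expect
        self.buffer = {}               # out-of-order packets held until gaps fill
        self.retransmit_requests = []  # (upstream neighbor, seq) pairs requested

    def receive(self, upstream, seq, payload):
        """Accept a packet; on a gap, ask the *previous hop* to resend."""
        if seq > self.expected_seq:
            # Every sequence number between expected and seq is missing.
            for missing in range(self.expected_seq, seq):
                if missing not in self.buffer:
                    self.retransmit_requests.append((upstream, missing))
        self.buffer[seq] = payload
        # Advance past any contiguous run we now hold.
        while self.expected_seq in self.buffer:
            self.expected_seq += 1
```

Because the request goes only one hop back, the round trip is short; the buffering also avoids re-requesting packets that merely arrived out of order.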
  • FIG. 2 shows a representative transmitting appliance 200 which may be used in accordance with one embodiment. The appliance 200 may be in communication with a video encoder 250 via an encoder interface 210. The encoder 250 encodes baseband video into an MPEG format, such as but not limited to MPEG-TS. This content flow is then divided into packets, which are sent to the transmitting appliance 200. In one embodiment, the MPEG-TS packets are transmitted via UDP. In other embodiments, a different network protocol is used to transmit the packets. Although the encoder 250 and the transmitting appliance 200 are shown as separate components, in some embodiments, these two elements may exist within one physical component.
  • The transmitting appliance 200 receives packets from the encoder 250 via the encoder interface 210. In the coding block 220, a predetermined number of redundant packets are created. This coding block 220 receives a group of packets, where the size of the group is predetermined, but programmable. It then uses this group of packets to create a set of redundant packets, where the number of redundant packets is selectable by the user or by the transmitting appliance 200. For example, the coding block 220 may accept ten packets, and create five redundant packets based on the information in these ten packets. The purpose of this redundancy is to create a mechanism by which the original ten packets can be reconstituted, even if those original ten packets do not reach the destination. For example, assume that the fourth of the ten original packets did not reach the destination, but the five redundant packets arrived. Using the nine original packets and the five redundant packets, it is possible to reconstruct the missing fourth packet. The number of packets in a group and the number of redundant packets are implementation decisions, and their choices are not limited by the present disclosure. Additional redundant packets provide better insurance against lost packets, since the destination is better able to reconstruct packets when more redundant information is available. Of course, this added number of redundant packets requires additional bandwidth for the transmission. Thus, if five redundant packets are created for every group of ten original packets, the content flow will require 50% more bandwidth than the original content flow.
  • In one embodiment, the coding block uses Forward Erasure Correction (FEC) algorithms to create the redundant packets. One such embodiment may be found at www.openFEC.org, although other embodiments may also be used. In one particular embodiment, the LDPC Staircase codec may be utilized.
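The erasure-coding principle described in the two paragraphs above can be illustrated with the simplest possible code: a single XOR parity packet, which can reconstitute exactly one lost packet per group. The codecs the application actually contemplates (such as LDPC Staircase) generalize this so that several redundant packets can repair several losses; the functions below are a toy sketch of the principle, not a production FEC implementation.

```python
# Toy erasure code: one XOR parity packet per group (illustrative only).
def xor_packets(packets):
    """Byte-wise XOR of equal-length packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def encode_group(group):
    """Return the set of packets: the original group plus one parity packet."""
    return list(group) + [xor_packets(group)]

def decode_group(received, group_size):
    """`received` maps index -> packet (index == group_size is the parity).
    Returns the original group, reconstructing one missing packet if needed."""
    missing = [i for i in range(group_size) if i not in received]
    if not missing:
        # All originals arrived: the parity packet is simply discarded.
        return [received[i] for i in range(group_size)]
    assert len(missing) == 1, "a single parity packet can repair only one loss"
    # XOR of everything that did arrive (originals + parity) equals the
    # missing packet, because x ^ x == 0.
    received[missing[0]] = xor_packets(list(received.values()))
    return [received[i] for i in range(group_size)]
```

With ten original packets and five redundant packets, as in the example above, a real code of this family can survive up to five losses per set, at the cost of 50% extra bandwidth.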
  • The resulting set of packets (where a set is defined as the original group plus the redundant packets) is then moved to the packet formation block 230 for further processing. The packet formation block 230 generates a header for each packet in the set. The header may contain various information. For example, in one embodiment, the header includes packet sequencing information, transmission identification information, and the FEC ratio (i.e. the ratio of the redundant packets to the group size). The header is appended to each packet.
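The application names the header fields (sequence information, transmission ID, FEC ratio) but not their encoding; the layout below is purely an assumed one for illustration, using a 4-byte sequence number, a 4-byte transmission ID, and one byte each for the group size and the redundant-packet count (which together express the FEC ratio).

```python
import struct

# Hypothetical header layout; the field widths and ordering are assumptions.
HEADER = struct.Struct("!IIBB")  # network byte order

def add_header(payload, seq, tx_id, group_size, redundant):
    """Prepend a header to one packet of the set, as in the packet formation block."""
    return HEADER.pack(seq, tx_id, group_size, redundant) + payload

def strip_header(packet):
    """Split a received packet into its header fields and its payload."""
    seq, tx_id, group_size, redundant = HEADER.unpack_from(packet)
    return (seq, tx_id, group_size, redundant), packet[HEADER.size:]
```

Carrying the FEC ratio in every header is what lets the receiver in FIG. 4 adapt when the transmitter restarts with a different ratio, since each set is self-describing.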
  • These packets are then transmitted over the overlay network using the transmission block 240. The packet formation block 230 contacts the transmission block 240 using a standard API. In one embodiment, the transmission block 240 executes the SPINES software and the API provided by the SPINES software. The transmission block 240 then forwards the packets across the overlay network.
  • The transmitting appliance 200 is an additional node in the overlay network shown in FIG. 1. In other words, any transmitting appliance 200 (of which there may be many, depending on the number of remote content generation sites) is an end node for the overlay network of FIG. 1.
  • FIG. 2 shows the transmitting appliance 200 subdivided into four blocks 210-240. This division is for illustration purposes only, and the invention is not required to partition the functions performed by the transmitting appliance 200 in this way.
  • As shown in FIG. 3, the transmitting appliance 200 may be a general purpose computing element, having a processing unit 201, associated memory elements 202 and one or more Network Interface Cards (NICs) 203. The associated memory 202 may be in the form of semiconductor memory, magnetic memory (such as disk drives) or optical memory (such as CD-ROMs). The semiconductor memory may be volatile, such as RAM, non-volatile, such as ROM, or a re-writable non-volatile memory, such as FLASH EPROM. The instructions needed to perform the functions described above are stored in the memory elements associated with the processing unit 201. The processing unit 201 may be a single processor, a multi-core processor, or may be a plurality of processors. The transmitting appliance 200 may also include other hardware, such as dedicated hardware to perform the coding (such as FEC) algorithm, or DMA (direct memory access) machines to facilitate movement of data through the appliance 200. Of course, the coding algorithm can also be performed using a software implementation running on the processing unit 201. In one embodiment, data is received from the encoder 250 via NIC 203, processed in the memory 202 by the processing unit 201, and passed out to the overlay network via the NIC 203. Of course, the transmitting appliance 200 may be constructed in a variety of different ways. For example, the transmitting appliance 200 may be a special purpose device, constructed specifically for this task. It may also be a traditional PC (personal computer), running the software needed to perform these functions. The appliance may also be integrated into third party encoding or decoding equipment to allow for compatibility with the overlay network.
  • As stated above, the transmission block 240 transmits the packets to the overlay network shown in FIG. 1.
  • At the destination, a receiving appliance is disposed. The receiving appliance is a node which is part of the overlay network shown in FIG. 1. A representative functional diagram of a receiving appliance 300 is shown in FIG. 4. The data from the overlay network enters the receiving appliance 300 and is received by the input block 310. This input block 310 may execute the same software found in the transmission block 240 of the transmitting appliance 200. In one embodiment, this software is the SPINES or RON software described above.
  • The input block 310 passes the received packets to the header deconstruction block 320, which serves to read the header of every packet to determine the transmission ID, the FEC ratio and current sequence number of the incoming stream. The header deconstruction block 320 uses this to detect changes in the stream in the event of stop/start of the transmitter, and is therefore also able to handle changes in FEC ratio. The header deconstruction block 320 waits to receive a set of packets (defined as a group of original packets and the redundant packets), or until a sufficient time has elapsed such that any missing packets are assumed to be lost.
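The wait-for-set-or-timeout behavior described above can be sketched as follows; the timeout value and the clock injection are assumptions for illustration, not details from the application.

```python
import time

# Sketch of the header deconstruction block's collection logic (assumed behavior):
# gather packets belonging to one set until either the whole set has arrived or
# a deadline passes, at which point any still-missing packets are assumed lost
# and the set is handed to the coding block for reconstruction.
class SetCollector:
    def __init__(self, set_size, timeout=0.5, clock=time.monotonic):
        self.set_size = set_size    # originals + redundant packets
        self.timeout = timeout      # seconds to wait for stragglers
        self.clock = clock          # injectable for testing
        self.packets = {}           # index -> payload
        self.started = None

    def add(self, index, payload):
        if self.started is None:
            self.started = self.clock()
        self.packets[index] = payload

    def ready(self):
        """True once the set is complete or the wait has expired."""
        if len(self.packets) == self.set_size:
            return True
        return self.started is not None and self.clock() - self.started >= self.timeout
```

The timeout bounds the latency added by FEC: the decoder never stalls indefinitely on a packet that will never arrive.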
  • The set of packets is then passed to the coding block 330, which accepts the set of packets (or the received packets from a set). The coding block 330 may then reconstruct any missing or corrupted original packets, using the information available in the set of packets. If all original packets were properly received, the coding block 330 can simply discard the redundant packets. In all other cases, the redundant packets are used to reconstruct the missing original packets. The packets, which define the original group transmitted by video encoder 250 to the transmitting appliance 200, are then forwarded to the output block 340. The output block 340 forwards the group of packets to a video decoder 350. This transmission may use UDP or some other network protocol.
  • As described above, the video decoder 350 may be part of the same physical component as the receiving appliance 300, or may be a separate device. Furthermore, the underlying architecture of the receiving appliance 300 may be similar to that of the transmitting appliance 200, as shown in FIG. 3.
  • The following describes one method in which the above system may operate. Referring to FIG. 5, assume that a content provider 400 is located in Oregon and the destination site 401 is in Massachusetts. A transmitting appliance is located at the content provider 400 in Oregon. This transmitting appliance receives a video stream from a video encoder, as described above. The transmitting appliance is a node on the overlay network and makes the video feed available using a protocol such as multicast.
  • At the destination 401, a user determines that they desire to view the video stream being sourced at the content provider 400 in Oregon. The receiving appliance, located at the destination 401, requests that multicast group from its localhost daemon. That daemon then determines the best route to the source, using the overlay nodes. In this example, the daemon selects a route that utilizes overlay nodes 100 e, 100 b and 100 a.
  • The transmitting appliance transmits packets over the overlay network, after performing all of the steps and functions described above. Thus, when the content stream exits the content provider 400 and is transmitted to overlay node 100 a, the individual packets each have headers, and the redundant packets have been created and are transmitted with the original group of packets. This stream reaches overlay node 100 a, which checks to make sure that all packets arrive. This may be done by checking sequence numbers or some other similar mechanism. Any packets that do not arrive are requested again by overlay node 100 a. The overlay node may have sufficient buffering so that it can receive packets out of order and keep track of which packets are missing. In this way, unnecessary retransmissions are minimized. The overlay node 100 a does not provide any of the coding functions described above. Rather, the overlay node 100 a executes the software needed to create the overlay network and forwards the packets to the next overlay node 100 b. The overlay node 100 b performs the same checks to ensure that all of the packets have been received and forwards the packets (including the original group of packets and the redundant packets) to overlay node 100 e. Overlay node 100 e performs the same functions as the other overlay nodes 100 a, 100 b and forwards the packets to the destination 401. The receiving appliance located at the destination 401 is also part of the overlay network, so it is able to request retransmissions if it determines that one or more packets were lost.
  • The receiving appliance then performs the functions described above, where it strips off the headers, reorders the packets, and reconstructs any lost original packets. The original group of packets is then delivered, such as by using UDP, to the video decoder.
  • This configuration provides the reliability improvement inherent in an overlay network. This improvement is further augmented by using FEC (or some other forward erasure or error coding) to further protect against dropped or lost packets, thus offering a robust, low-cost mechanism to transmit time-sensitive information, such as a video stream, across the Internet.
  • In one particular embodiment, this Internet-based video delivery system can be combined with the remote controlled studio camera system, as disclosed in copending U.S. Patent Publication 2012/0212609. For example, the first subsystem described in that application may include the transmitting appliance described herein. The other subsystems that receive a video stream from the first subsystem, such as but not limited to the fourth subsystem, may include a receiving appliance as described herein.
  • The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes.

Claims (8)

What is claimed is:
1. A system for delivering time-sensitive content across the Internet, comprising:
an overlay network, comprising one or more overlay nodes;
a transmitting appliance, comprising:
a packet formation block to create original packets from an incoming content stream;
a coding block to create redundant packets which allow reconstruction of original packets in the event of a packet loss, said original packets and said redundant packets defining a set of packets;
a transmission block to transmit said set of packets to a first of said overlay nodes; and
a receiving appliance, comprising:
an input block to receive packets from one of said overlay nodes in said overlay network;
a packet deconstruction block to properly sequence said received packets;
a coding block to reconstruct said original packets from said received packets in case of a loss of one or more packets in said transmitted set of packets; and
an output block which transmits said reconstructed original packets to a decoder.
2. The system of claim 1, wherein said coding block in said transmitting appliance and said coding block in said receiving appliance utilize forward erasure correction.
3. A method of transmitting content over the internet between a content provider and a destination, comprising:
receiving content from said content provider, where said content is divided into a set of original packets;
creating redundant packets based on said original packets;
transmitting said original packets and said redundant packets over an overlay network;
receiving packets transmitted over said overlay network from said content provider;
reconstructing said original packets based on said received packets; and
delivering said reconstructed packets to said destination.
4. The method of claim 3, wherein said redundant packets are constructed using forward erasure correction.
5. The method of claim 3, wherein said overlay network comprises at least one intermediate node disposed between said content provider and said destination.
6. The method of claim 5, wherein said transmitted packets are sent to said intermediate node, and said intermediate node forwards said packets to said destination.
7. The method of claim 6, wherein said intermediate node does not perform said reconstructing step.
8. The method of claim 6, wherein said intermediate node requests retransmission of any missing packets.
US13/796,053 2012-11-19 2013-03-12 Internet-Based Video Delivery System Abandoned US20140143820A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US13/796,053 US20140143820A1 (en) 2012-11-19 2013-03-12 Internet-Based Video Delivery System
PCT/US2013/044471 WO2014077899A1 (en) 2012-11-19 2013-06-06 Internet-based video delivery system
US14/831,084 US9661373B2 (en) 2012-11-19 2015-08-20 Internet-based video delivery system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261728012P 2012-11-19 2012-11-19
US13/796,053 US20140143820A1 (en) 2012-11-19 2013-03-12 Internet-Based Video Delivery System

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/831,084 Continuation US9661373B2 (en) 2012-11-19 2015-08-20 Internet-based video delivery system

Publications (1)

Publication Number Publication Date
US20140143820A1 true US20140143820A1 (en) 2014-05-22

Family

ID=50729239

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/796,053 Abandoned US20140143820A1 (en) 2012-11-19 2013-03-12 Internet-Based Video Delivery System
US14/831,084 Active US9661373B2 (en) 2012-11-19 2015-08-20 Internet-based video delivery system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/831,084 Active US9661373B2 (en) 2012-11-19 2015-08-20 Internet-based video delivery system

Country Status (2)

Country Link
US (2) US20140143820A1 (en)
WO (1) WO2014077899A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9661373B2 (en) 2012-11-19 2017-05-23 Videolink Llc Internet-based video delivery system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9019372B2 (en) 2011-02-18 2015-04-28 Videolink Llc Remote controlled studio camera system
US9992101B2 (en) 2014-11-24 2018-06-05 Taric Mirza Parallel multipath routing architecture
WO2017184807A1 (en) * 2016-04-20 2017-10-26 Taric Mirza Parallel multipath routing architecture
US10361817B2 (en) * 2017-01-20 2019-07-23 Dolby Laboratories Licensing Corporation Systems and methods to optimize partitioning of a data segment into data packets for channel encoding

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100165830A1 (en) * 2008-12-22 2010-07-01 LiveTimeNet, Inc. System and method for recovery of packets in overlay networks

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020097322A1 (en) 2000-11-29 2002-07-25 Monroe David A. Multiple video display configurations and remote control of multiple video signals transmitted to a monitoring station over a network
US6564380B1 (en) 1999-01-26 2003-05-13 Pixelworld Networks, Inc. System and method for sending live video on the internet
US6774926B1 (en) * 1999-09-03 2004-08-10 United Video Properties, Inc. Personal television channel system
US6981045B1 (en) * 1999-10-01 2005-12-27 Vidiator Enterprises Inc. System for redirecting requests for data to servers having sufficient processing power to transcast streams of data in a desired format
WO2001056266A2 (en) 2000-01-28 2001-08-02 Ibeam Broadcasting Corporation Method and apparatus for encoder-based distribution of live video and other streaming content
EP1331808B8 (en) 2002-01-16 2014-10-15 Thomson Licensing Production system, control area for a production system and image capturing system for a production system
US7933945B2 (en) 2002-06-27 2011-04-26 Openpeak Inc. Method, system, and computer program product for managing controlled residential or non-residential environments
JP3919632B2 (en) 2002-08-14 2007-05-30 キヤノン株式会社 Camera server device and image transmission method of camera server device
JP4243140B2 (en) 2003-06-11 2009-03-25 日本放送協会 Data transmitting apparatus, data transmitting program and data receiving apparatus, data receiving program and data transmitting / receiving method
US8209400B2 (en) 2005-03-16 2012-06-26 Icontrol Networks, Inc. System for data routing in networks
US7355975B2 (en) * 2004-04-30 2008-04-08 International Business Machines Corporation Method and apparatus for group communication with end-to-end reliability
GB2420044B (en) 2004-11-03 2009-04-01 Pedagog Ltd Viewing system
WO2006072994A1 (en) 2005-01-07 2006-07-13 Systemk Corporation Login-to-network-camera authentication system
US7584433B2 (en) 2005-04-20 2009-09-01 Avp Ip Holding Co., Llc. Extendible and open camera connector system
JP2008118271A (en) 2006-11-01 2008-05-22 Fujinon Corp Remote control system of imaging apparatus
US7839798B2 (en) * 2007-06-22 2010-11-23 Microsoft Corporation Seamlessly switching overlay network relay trees
US8477177B2 (en) 2007-08-10 2013-07-02 Hewlett-Packard Development Company, L.P. Video conference system and method
JP5056359B2 (en) 2007-11-02 2012-10-24 ソニー株式会社 Information display device, information display method, and imaging device
US8619775B2 (en) 2008-07-21 2013-12-31 Ltn Global Communications, Inc. Scalable flow transport and delivery network and associated methods and systems
KR20110042331A (en) 2008-07-28 2011-04-26 톰슨 라이센싱 A method and apparatus for fast channel change using a secondary channel video stream
US8181210B2 (en) 2008-08-07 2012-05-15 LiveTimeNet, Inc. Method for delivery of deadline-driven content flows over a flow transport system that interfaces with a flow delivery system via a selected gateway
US20110119716A1 (en) 2009-03-12 2011-05-19 Mist Technology Holdings, Inc. System and Method for Video Distribution Management with Mobile Services
US8599851B2 (en) 2009-04-03 2013-12-03 Ltn Global Communications, Inc. System and method that routes flows via multicast flow transport for groups
US9019372B2 (en) 2011-02-18 2015-04-28 Videolink Llc Remote controlled studio camera system
US20140143820A1 (en) 2012-11-19 2014-05-22 Videolink Llc Internet-Based Video Delivery System



Also Published As

Publication number Publication date
US20150358668A1 (en) 2015-12-10
US9661373B2 (en) 2017-05-23
WO2014077899A1 (en) 2014-05-22


Legal Events

Date Code Title Description
AS Assignment

Owner name: VIDEOLINK LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AKIMCHUK, JAMES W., III;WILLIS, LEIGH;REEL/FRAME:030107/0187

Effective date: 20130325

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION