WO2011094346A1 - Integrated concurrent multi-standard encoder, decoder and transcoder - Google Patents

Integrated concurrent multi-standard encoder, decoder and transcoder

Info

Publication number
WO2011094346A1
WO2011094346A1 · PCT/US2011/022624 · US2011022624W
Authority
WO
WIPO (PCT)
Prior art keywords
data
signal
video
audio
encoding
Prior art date
Application number
PCT/US2011/022624
Other languages
English (en)
Inventor
Barry L. Hobbs
Original Assignee
Hobbs Barry L
Priority date
Filing date
Publication date
Application filed by Hobbs Barry L filed Critical Hobbs Barry L
Publication of WO2011094346A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements

Definitions

  • the present invention relates generally to video and audio encoding. More particularly, the present invention relates to an integrated concurrent multi-standard encoder, decoder and transcoder for encoding, decoding and transcoding video and audio into multiple standards and/or at multiple data rates.
  • This apparatus will provide the users with capabilities to address concerns with security and information assurance.
  • the apparatus provides unique capabilities for automated reference time clock and positioning information with the ability to add additional metadata.
  • the system capabilities yield an apparatus providing rapid access to critical video in multiple modes and data rates. These qualities, in whole and in part, are required in surveillance, news-gathering, security, and commercial/broadcast video applications.
  • the apparatus described is capable of processing video, audio and data and placing the information on a serial stream of data for transmission over multiple media links.
  • the video processing is commonly referred to as encoding or compressing.
  • the apparatus may be programmed to process a single video signal with multiple audio and data services or concurrently processing dual video programs in multiple standards and modes.
  • the video standards and modes may be at different processing and output data rates.
  • the video may be supplemented with application software that would allow the following optional features:
  • An optional real time clock that will provide a time code reference to the video.
  • An optional Global Positioning System (GPS) capability, when enabled with additional application software, may allow the user an automated calculation, in conjunction with a camera, of the GPS position of the video being processed. The GPS position will be associated with the proper video frames and provided in separate private data packets.
  • An optional Program System Information Protocol capability to provide a minimum of sixteen days of program data information originated from a third party source and embedded in the transport stream.
  • An optional internal capability to generate, store and transmit transport stream Program System Information Protocol packets.
  • An optional Digital Video Broadcasting (DVB) program guide capability, generated via a third party through a predetermined interface.
  • the current art form allows items a and b, above, to be integrated on a single processing board today.
  • Items c, d and e, above, are generated through a combination of external hardware and application software today. Items g and h are generally not done in today's art form within the encoding environment, especially on a single board.
  • a concurrent item i is also not implemented on a single board in today's art form.
  • Item j is implemented through additional hardware in today's art form. It is the combination of these capabilities, which provide the unique art form for this design on the integrated multi-encoder apparatus.
  • This apparatus shall also process audio.
  • the audio processes supported may be Dolby AC-3, AC-3 5.1, Dolby-E, AAC, PCM, Musicam or other audio processes defined and carried in a compliant fashion in the International Standards Organization/International Electrotechnical Commission 13818-1 systems and/or 13818-3 audio specifications.
  • the audio services can be synchronized with the video or they may be separate and independent audio processes that are independent of any processed video packets.
  • the Figures describe a high-level, step-by-step view of each of the concurrent processes and provide examples of the processing elements provided to process the information. The actual processing elements and the number of processing elements will vary due to programmers' preferences and the video and audio standards being processed.
  • Figure 1 is a composite Figure of the functions, processing elements, inputs and outputs of a video encoder ("apparatus") in accordance with one embodiment of the present invention;
  • Figure 2 is a view of the parallel processor built by Coherent Logix of Austin, Texas.
  • Figure 3 is a view of the parallel processor's processing elements, data memory and routing capabilities along with its input and output structures;
  • Figure 4 is a view of the first video input with its signal routing and signal processing elements
  • Figure 5 is a view of the first section of audio inputs, up to eight, with their signal routing and signal processing elements;
  • Figure 6 is a view of the second section of audio inputs, up to four, with their signal routing and signal processing elements;
  • Figure 7 is a view of the network time reference and real time clock used to derive the Universal Time Code;
  • Figure 8 is a view of the optional global positioning system inputs, signal routing and signal processing elements
  • Figure 9 is a view of the optional software application program for object recognition, and its signal routing and signal processing elements
  • Figure 10 is a view of the optional stream analysis capability for a resident software application program. This depicts the signal routing of this information;
  • Figure 11 depicts the Digital Video Broadcasting, DVB, system information input, routing and processing elements
  • Figure 12 depicts the Internal Program Specific Information Protocol routing and processing element for an internal software applications program resident on the processing board
  • Figure 13 depicts the External Program Specific Information Protocol routing and processing element for an external input formatted for the processing board
  • Figure 14 depicts the command interface to the processing board
  • Figure 15 depicts the control function for the processor board
  • Figure 16 depicts the input, routing and processing element utilization for embedding conditional access information for network receiver control
  • Figure 17 depicts the second video input, routing and processing element utilization
  • Figure 18 depicts a third set of high data rate audio inputs, up to eight inputs, their routing and processing element utilization;
  • Figure 19 depicts the first Asynchronous Serial Output, processing element utilization and output routing
  • Figure 20 depicts the second Asynchronous Serial Output, processing element utilization and output routing
  • Figure 21 depicts the third Asynchronous Serial Output, processing element utilization and output routing
  • Figure 22 depicts the first Ethernet output, its processing elements and routing
  • Figure 23 depicts the second Ethernet output, its processing elements and routing
  • Figure 24 depicts a serial data output for an external monitor for the processed video and audio. This provides a view of the processing elements and routing;
  • Figure 25 depicts a high-resolution memory component, routing and associated processing elements
  • Figure 26 depicts an external ASI input, its routing, and associated processing elements
  • Figure 27 depicts the processor's boot function components
  • Figure 28 depicts the RS-232 input for closed captioning, its routing and associated processing elements
  • Figure 29 depicts the composite video input for synchronization to a studio input, its routing and associated processing elements
  • Figure 30 depicts the utilization of processing elements, in this example elements (9, 10) through (9, 19), for electronic image stabilization;
  • Figure 31 is a Figure of the 10 X 10 parallel processor and the ability to boot the system for a decoder in accordance with one embodiment of the present invention
  • Figure 32 is a view of the first RF input and routing
  • Figure 33 is a view of the addition of a second RF input channel and its routing
  • Figure 34 is a view of the addition of the first ASI input and its routing
  • Figure 35 is a view of the addition of the second ASI input and its routing
  • Figure 36 is a view of the addition of a first Ethernet input
  • Figure 37 is a view of the addition of a second Ethernet input
  • Figure 38 is a view of the addition of an input for a mobile transmission system
  • Figure 39 is a view of the addition of a signal level reference input for the mobile system
  • Figure 40 depicts the meta-data tagging and retrieval capability through an additional
  • Figure 41 depicts the ETR 290 analysis application
  • Figure 42 depicts the command channel for the apparatus
  • Figure 43 depicts the control channel for the apparatus
  • Figure 44 depicts the addition of a first ASI output
  • Figure 45 depicts the addition of a second ASI output
  • Figure 46 depicts the addition of a third ASI output
  • Figure 47 depicts the addition of a first Ethernet output
  • Figure 48 depicts the addition of a second Ethernet output
  • Figure 49 depicts the addition of a first digital SMPTE 292M or SMPTE 259M output
  • Figure 50 depicts the addition of a second digital SMPTE 292M or SMPTE 259M output
  • Figure 51 depicts the addition of a first output to support an analog high definition interface
  • Figure 52 depicts the addition of a second output to support an analog high definition interface.
  • Figure 53 depicts the ability to provide a first multi-channel audio output capability
  • Figure 54 depicts the ability to provide a second multi-channel audio output capability
  • Figure 55 depicts the ability to provide a Program Specific Information Protocol
  • Figure 56 depicts the first video and associated channel decoding and access to video memory
  • Figure 57 depicts the second video and associated channel decoding and access to video memory
  • Figure 58 depicts the ability to scale one of the two video decoding applications based on a received RF signal strength.
  • Figure 1 depicts a high level composite picture of the inputs and outputs with the associated signal routing and processing element utilization.
  • the Figure also provides a view of the critical processor communications structure through complementary Low Voltage Differential Signaling (LVDS).
  • the encoder is contained on a single printed circuit board. At the center of the board are two massive parallel processors each containing multiple processing elements. These processors contain 968Kbytes of data memory and 400Kbytes of instruction memory.
  • the data memory and routing elements, DMRs, provide data memory, control logic, registers and routers for fast routing services.
  • This structure provides the real-time programmable and adaptable communications fabric to support arbitrary network topologies and sophisticated algorithm implementations. There are twenty-four (24) Input/Output blocks to connect the periphery to DMRs. This structure also supports sustainable on-chip communications to other Hx3100 processors and allows the preservation of a consistent programming model.
  • the input/output structure also enables interfacing to other memory, processor buses, analog-to-digital converters, sensors and displays. There are, in addition, 24 user configurable timers, one associated with each Input/Output element.
  • a.) Two programmable input ports for either analog video or digital video; b.) Three programmable input ports for analog audio or digital audio; c.) Two programmable Ethernet ports that will support either Internet Protocol version 4 or Internet Protocol version 6; d.) An RS-232 interface with an analog to digital converter to support the closed captioning for the hearing impaired feature; e.) An analog input for a composite synchronization signal from a studio reference.
  • Ethernet output ports: There are two Ethernet output ports that will support either IPv4 or IPv6 formats. These ports will be void of command, control and other extraneous information. They are reserved for processed video, audio and data services.
  • FIG. 2 depicts an internal view of the type of massive parallel processor that sits at the core of this apparatus.
  • the internal structure contains one-hundred (100) processing elements ("PEs"), configured in a ten-by-ten (10X10) physical array.
  • Each processing element can be configured to perform a unique mathematical function on a cycle-by-cycle basis.
  • the DMR and the PE structure allows the processing elements to be efficiently configured for multiple functions and multiple program executions on a concurrent time basis.
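The per-function, per-cycle configurability of the processing elements can be illustrated with a toy model; the grid class, the stand-in functions and the task-to-region mapping below are hypothetical and do not reflect the actual Coherent Logix toolchain.

```python
# Illustrative model (not the actual HyperX toolchain): a 10x10 grid of
# processing elements, each assignable a distinct function, with independent
# tasks mapped onto disjoint regions of the array and run concurrently.

GRID_ROWS, GRID_COLS = 10, 10

class ParallelArray:
    def __init__(self):
        # Each PE holds the function currently assigned to it (None if idle).
        self.pe = {(r, c): None for r in range(GRID_ROWS) for c in range(GRID_COLS)}

    def assign(self, cells, func):
        """Configure a block of PEs to perform one function, e.g. audio coding."""
        for cell in cells:
            if self.pe[cell] is not None:
                raise ValueError(f"PE {cell} already allocated")
            self.pe[cell] = func

    def step(self, inputs):
        """One cycle: every configured PE applies its function to its input."""
        return {cell: f(inputs.get(cell, 0)) for cell, f in self.pe.items() if f}

array = ParallelArray()
# Map a hypothetical video task onto row 0 and an audio task onto (2,0)-(2,3),
# mirroring the style of PE allocations described in the text.
array.assign([(0, c) for c in range(10)], lambda x: x * 2)  # stand-in "video" op
array.assign([(2, c) for c in range(4)], lambda x: x + 1)   # stand-in "audio" op

out = array.step({(0, 0): 3, (2, 0): 3})
print(out[(0, 0)], out[(2, 0)])   # 6 4
```

Disjoint regions never contend for the same PE, which is what allows the two programs to run on a concurrent time basis.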
  • a massive parallel processor of the type described in the previous paragraph provides the capability to utilize the low voltage differential signaling, LVDS, ports, extending the processing capabilities across multiple massive parallel processors.
  • the multiple processors can be unified under the control of one instruction set, either on board or through an external electrically erasable programmable read-only-memory component on the same board.
  • the Figure also provides a view of the interface ports for the boot information as well as multiple CMOS, DDR and LVDS interface capabilities.
  • Figure 3 provides an understanding of the data memory and routing capability and the relationship to the processing elements of the Hx3100 or similar type massive parallel processor. In the following Figures, not all of the DMR elements and address routing for each function are shown. It is important to understand this particular Figure and the ability to move across the fabric of the processor in an efficient manner.
  • a snapshot view of four (4) of the one hundred (100) processing elements, nine (9) data memory and routing elements, as well as nine input/output routers of one Coherent Logix Hx3100 component is also illustrated in Figure 3. This depicts the multiple address capabilities allowing data to be routed and stored from as many as eight (8) sources into DMR (1,1). The efficiency in routing data through a multi-dimensional implementation provides the programmer capabilities to write software algorithms that are more efficient in processing by reducing the number of cycles required, and reducing power consumption.
  • Figure 4 depicts an example of how the first programmable video input for the apparatus may be routed.
  • the analog or digital input signal flow is from the input port (Figure 1A) to either the analog to digital converter (Figure 1B) or to the DDR memory (Figure 1C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router (Figure 1D) of the processor. The signal is then routed to the data memory and routing element, DMR, (Figure 1E) of the processor. From the DMR, the signal is routed to processing elements (0,0) through (0,19) plus (1,0) through (1,19), plus (5,0) through (5,9) and (6,0) through (6,9).
  • the input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
  • When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component, located on the board and identified as Figure 1B. These components are readily available, such as the Analog Devices AD9203. This device is commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
  • When a digital input is utilized, it can be routed through a double data rate synchronous dynamic random access memory component, DDR SDRAM, such as an EOREX EM44CM1688LBA as shown in Figure 1C.
  • the EM44CM1688LBA is commercially available from EOREX of Chubei County, Taiwan.
  • the input signal is then clocked into the massive parallel processor's input/output data port.
  • the signal is routed to the associated DMR on the processor. From the DMR a preprogrammed software application instructs the massive parallel processor to complete one or more of the following functions, as required: a.) separation into video, audio or data elements by address;
  • ISO/IEC 13818-2 for the video format for MPEG-2, or H.264 for advanced video coding;
  • ISO/IEC 13818-3 for audio coding of Musicam, or formatting for packaging of alternate audio systems such as Dolby or Advanced Audio Coding.
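In an ISO/IEC 13818-1 transport stream, the "separation into video, audio or data elements by address" corresponds to demultiplexing 188-byte packets by their 13-bit packet identifier (PID). A minimal sketch follows; the PID-to-service assignments are hypothetical.

```python
# Minimal sketch of separation by address in an ISO/IEC 13818-1 transport
# stream: each 188-byte packet starts with sync byte 0x47, and the 13-bit PID
# in bytes 1-2 identifies the elementary stream it carries. The PID map used
# below is an illustrative assumption, not taken from the patent.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def pid_of(packet: bytes) -> int:
    """Extract the 13-bit packet identifier from one transport stream packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport stream packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]

def demux(stream: bytes, pid_map: dict) -> dict:
    """Route each packet to the service (video/audio/data) its PID belongs to."""
    out = {name: [] for name in pid_map.values()}
    for i in range(0, len(stream), TS_PACKET_SIZE):
        pkt = stream[i:i + TS_PACKET_SIZE]
        service = pid_map.get(pid_of(pkt))
        if service:
            out[service].append(pkt)
    return out

def make_pkt(pid: int) -> bytes:
    """Build a dummy packet with the given PID (payload zeroed)."""
    return bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF, 0x10]) + bytes(184)

# Hypothetical PIDs: 0x100 carries video, 0x101 carries audio.
streams = demux(make_pkt(0x100) + make_pkt(0x101) + make_pkt(0x100),
                {0x100: "video", 0x101: "audio"})
print(len(streams["video"]), len(streams["audio"]))   # 2 1
```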
  • Figure 5 depicts a programmable audio input port represented as Port 2 ( Figure 2A).
  • the input port is user-configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to eight, for systems such as Dolby "E".
  • the input audio signal flows from the input port(s) ( Figure 2A) to either the analog to digital converter ( Figure 2B) or directly into the input/output port of the parallel processor ( Figure 2C) if it is already formatted as a digital signal.
  • If the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port. In the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
  • the DMR routes the signal to processing elements (2,0) through (2,3).
  • the processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.)
  • the processing elements support audio systems including Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, Pulse coded modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, -2 and -3 documents.
  • the massive parallel processor's processing elements will provide the synchronized video to the proper output ports as defined by the programmer and/or end user.
  • Figure 6 depicts an alternative audio input on Port 3 ( Figure 3A) for additional audio services.
  • the signal flow is from an audio source (Figure 3A) to either the analog to digital converter (Figure 3B) or directly to the input/output port of the processor (Figure 3C). From the output of the analog to digital converter (Figure 3B), if utilized, the signal is sent to the input/output router of the processor (Figure 3C). From the input/output router the signal flows to the data memory and router (Figure 3D) and on to processing elements (2,4) and (2,5).
  • the input port is user configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to four, depending on the audio sources chosen by the end user. If the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD9203 is used to convert the analog audio signal into a digital audio signal.
  • the processing elements support audio systems that will include Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, Pulse Coded Modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, -2 and -3 documents.
  • the parallel processor's processing elements provide the synchronized video to the proper output ports as defined by the programmer and/or end user, if this audio is program related. This audio may be independent of the video.
  • Figure 7 depicts the Network Reference Clock (Figure 4A).
  • the signal flow is from a network provided clock (Figure 4A) to a real time clock (Figure 4B). From the output of the real time clock (Figure 4B) the signal is sent to an input/output port of the processor (Figure 4C). From the input/output port the signal flows to a data memory and routing element (Figure 4D) and on to processing element (2,6).
  • the network reference clock synchronizes time with specific networks and coordinates with broadcast clocks.
  • the real time clock on the board maintains synchronization when a network clock is not available.
  • the real time clock produces a reference for the on-board software application within the massive parallel processor to produce a Universal Time Code. This information marks contiguous video frames with metadata, which can then be used for future reference in video asset management systems.
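The frame-marking step can be sketched as follows: given a clock-derived start time and the stream's frame rate, each frame index maps to a UTC timestamp carried as metadata. The metadata field names and the start time below are illustrative assumptions, not the apparatus's actual format.

```python
# Hedged sketch of marking contiguous video frames with a time reference for
# later lookup in a video asset management system. Field names are illustrative.

from datetime import datetime, timedelta, timezone

def frame_metadata(start: datetime, frame_rate: float, frame_index: int) -> dict:
    """Return the metadata tag for one frame: its index and its UTC timestamp."""
    timestamp = start + timedelta(seconds=frame_index / frame_rate)
    return {"frame": frame_index, "utc": timestamp.isoformat()}

# Assumed reference instant provided by the network or real time clock.
start = datetime(2011, 1, 26, 12, 0, 0, tzinfo=timezone.utc)
tag = frame_metadata(start, 25.0, 50)   # 50 frames at 25 fps = 2 seconds in
print(tag["utc"])                       # 2011-01-26T12:00:02+00:00
```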
  • Figure 8 depicts the input from a global positioning sensor (Figure 5A). From the sensor the signal flows to the on-board telemetry (Figure 5B). The GPS and telemetry information pass to the input port of the massive parallel processor (Figure 5C). The processor will host a software application in its memory to process the GPS information with information provided by a camera's target position within the telemetry information. The information passes from the data memory and routing element (Figure 5D) to the processing elements as defined by the software application. In this case an example is provided using processing elements (2,7) and (2,8). The processing elements will calculate the observation point of the camera with the GPS software application. This information is formatted and synchronized with the video in the processing elements (2,7) and (2,8). The information is provided in the formatted output of the transmitted stream for video asset management systems.
  • GPS systems such as the Raytheon Anti-Jam Receiver are utilized on flight systems today. They interface at a 1394B specification level that is easily supported through the processor's input/output router configured for this digital input format.
  • Figure 9 depicts the object recognition input (Figure 6A).
  • the input sensor is typically a "smart camera" provided by vendors such as Pittsburgh Pattern or Cogent Systems that provides information formatted and compliant to the ISO 19794-5 standard.
  • the information is input through the sensor (Figure 6A) to Ethernet input ports, either (Figure 6I) or (Figure 6M).
  • Figures 6I and 6M are Ethernet input ports. These ports act independently of each other. They can be configured to either Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6). Each Ethernet port is independently configurable.
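The independent per-port IPv4/IPv6 selection can be sketched with standard sockets; the port names and the families chosen for them here are illustrative assumptions, not the apparatus's actual web services interface.

```python
# Sketch of configuring each Ethernet port independently for IPv4 or IPv6.
# Port names ("eth0"/"eth1") and family choices are hypothetical.

import socket

def open_port(family: str) -> socket.socket:
    """Create a datagram socket with the address family selected per port."""
    af = socket.AF_INET if family == "IPv4" else socket.AF_INET6
    return socket.socket(af, socket.SOCK_DGRAM)

ports = {"eth0": open_port("IPv4"), "eth1": open_port("IPv6")}
print(ports["eth0"].family == socket.AF_INET)    # True
print(ports["eth1"].family == socket.AF_INET6)   # True
for s in ports.values():
    s.close()
```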
  • the input from Figure 6I is routed to the input/output port of the massive parallel processor (Figure 6J).
  • the input and output element (Figure 6J) routes the signal to the DMR (Figure 6K) and then to the processing elements (2,9) through (2,11) and utilizes the object recognition software application located on Figure 6L.
  • the alternative route and object recognition software application storage is through the input from Figure 6M routed to the input/output port of the massive parallel processor ( Figure 6N).
  • the input and output element (Figure 6N) routes the signal to the DMR (Figure 6O) and on to processing elements (2,9) through (2,11) and utilizes the object recognition software application located on Figure 6P.
  • In Figure 9 the sensor input is depicted as Figure 6A.
  • the smart camera information is provided through the Ethernet Port (Figure 6I) to the parallel processor's input port (Figure 6J) and routed through the data memory and routing element (Figure 6K).
  • the information may require access to the program located in Figure 6L or be immediately processed by processing elements (2,9) through (2, 11).
  • This information can be sent in user data packets to the end users in the transport stream following ISO/IEC Standard 13818-1. This information can be coordinated with video frame timing information as described in ISO/IEC Standard 13818-2.
  • Some of the applications of object recognition are facial recognition, scene change detection, license plate recognition and geo-spatial location recognition.
  • Figure 10 depicts the ETR-290 software application. This is a European Telecommunications Standards Institute technical report providing measurement guidelines for transport streams.
  • the program resides on the processing board on non-volatile double data rate memory.
  • the program provides a web services interface for the user also stored in memory on the board ( Figure 6L).
  • the routing originates in DDR memory (Figure 6L) and monitors one of the three ASI outputs or two Ethernet output ports across the fabric of the massive parallel processor. Specifically, the way we depict this application is to monitor the output streams of the processing elements providing the output streams to Figures 9C, 10C, 11C, 12C and 13C. We can also monitor the external ASI stream in Figure 16C.
  • the web services interface provides compliance information for ISO/IEC 13818-1 or
  • Figure 11 depicts the DVB Standard system information being inserted into the program streams for non-ATSC PSIP applications.
  • the required tables of the standard for the transport stream are integrated and maintained in either DDR memory or memory on the massive parallel processor.
  • the information flows from the DDR memory ( Figure 6L) to the input/output element of the processor ( Figure 6J) through the data memory and routing element ( Figure 6K) and to a defined processing element on the processor.
  • processing element (2, 14) has been chosen.
  • the DVB program guide utilizes the DDR memory on the processing board.
  • This program guide information is provided through a system management external computer via the Ethernet Port ( Figure 6L).
  • This information is non-real time information and is downloaded to memory on an ad-hoc basis. This process occurs once every one to two weeks.
  • the processing signal flow for the DVB program guide information would be from the DDR memory ( Figure 6L) to the input/output router of the processor ( Figure 6J) through a DMR ( Figure 6K) to the processing element (2, 14).
  • Figure 12 depicts the Internal Program System Information Protocol (PSIP) software application. This is a software application that resides in non-volatile memory on the processor board. In the Figure the capacity for this software program resides in the DDR memory (Figure 6P). The data in the PSIP is populated by the end user through a web based interface.
  • the signal flows as follows.
  • the program resides in memory, Figure 6P. It is accessed on a continual basis by processing element (2,15).
  • the information flows from the software application stored in DDR memory (Figure 6P) through the I/O router of the massive parallel processor (Figure 6N) to the data memory router (Figure 6O). The information is then placed within the stream by processing element (2,15).
  • Figure 13 depicts an external Program System Information Protocol program that resides on hardware external to the processor board. This program enters through either Ethernet Port (Figure 6I) or (Figure 6M). The information is routed through the appropriate I/O elements on the massive parallel processor, either Figure 6J or 6N. From the I/O element the signal is routed to the appropriate data memory and routing element (Figure 6K) or (Figure 6O). The information is then processed by processing element (2,16) and placed onto the output stream in conformance with the A/65 specification.
  • Figure 14 depicts Command information for the video/audio and data encoder processor board.
  • the control information is stored in memory (Figure 6P) and accessed via web pages stored in non-volatile memory (Figure 6P) on the processor board.
  • the program is accessed through the Ethernet Port (Figure 6M) through the processor's I/O port (Figure 6N) and routed to the data memory and routing element (Figure 6O).
  • the information flows back through the I/O port (Figure 6N) to the DDR memory (Figure 6P). All changes are implemented through the route of the DDR (Figure 6P) through the I/O port (Figure 6N) to the data memory router (Figure 6O) to the processing element (2,17).
  • the command sets system parameters. These parameters include:
  • Frame rates including 24fps, 25fps, 29.97fps, 30fps, 50fps, 59.94fps and 60fps for the appropriate resolutions;
  • Audio data rates per channel or system, such as 384 kb/s for Dolby AC-3 5.1, up to 640 kb/s for non-Dolby E;
  • Dolby E rates 1.536 Mbs./sec at 16 bits, 1.920 Mbs./sec. for 20 bits or 2.304 Mbs./sec. for 24 bits sampled;
  • Audio sampling rate 16 bits, 20 bit, 24 bit for Dolby "E” , 32kHz, 44.1 kHz or 48 kHz;
  • ASI port configuration either byte or burst mode
  • Ethernet input port configuration either IPv4 or IPv6 for each port individually;
  • Ethernet output port configuration either IPv4 or Ipv6 for each port individually;
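The parameter set above can be sketched as a small validation routine. This is an illustrative sketch only (function and constant names are hypothetical; the apparatus configures these through web pages served from non-volatile memory). Note that the Dolby E rates listed follow directly from a 48 kHz stereo AES3 pair: 48,000 samples/s × 2 channels × word length.

```python
# Hypothetical sketch of command-channel parameter validation.
ALLOWED_FRAME_RATES = {24, 25, 29.97, 30, 50, 59.94, 60}   # fps
ALLOWED_SAMPLE_RATES = {32_000, 44_100, 48_000}            # Hz

def dolby_e_rate(word_bits: int) -> int:
    """Data rate in bits/s for Dolby E carried on a 48 kHz stereo AES3 pair."""
    if word_bits not in (16, 20, 24):
        raise ValueError("Dolby E word length must be 16, 20 or 24 bits")
    return 48_000 * 2 * word_bits   # sample rate x channels x word length

def validate(frame_rate: float, sample_rate: int) -> None:
    if frame_rate not in ALLOWED_FRAME_RATES:
        raise ValueError(f"unsupported frame rate: {frame_rate}")
    if sample_rate not in ALLOWED_SAMPLE_RATES:
        raise ValueError(f"unsupported sample rate: {sample_rate}")

validate(29.97, 48_000)
print(dolby_e_rate(16), dolby_e_rate(20), dolby_e_rate(24))
# -> 1536000 1920000 2304000, matching the 1.536/1.920/2.304 Mb/s rates above
```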
  • Figure 15 depicts the Control function of the Processor Board.
  • the control parameters and web services interface pages are stored in non- volatile memory in ( Figure 6P).
  • the access is requested through the Ethernet Port ( Figure 6M).
  • the request is routed through the input/output router of the processor (Figure 6N) to the DDR non-volatile memory (Figure 6P) and/or the Boot section of the processor, or external EEPROM holding boot information, through the data and memory routing element (Figure 6O).
  • the control function appears on web pages to the end user.
  • the functions addressed in the control include, but are not limited to:
  • Figure 16 depicts the Conditional Access "CA" information from an external subscriber management system. There are two separate pieces of conditional access information.
  • the first piece of conditional access information is related to the program(s) being processed on the processor board by the massive parallel processor. This information is an indication of whether the program utilizes conditional access or if it does not use conditional access information. This information is provided and stored for each program on the massive parallel processor.
  • the second piece of the conditional access information is also originated in an external subscriber management computer and transmitted to the reception devices in the network. This tells the reception device if it has access to the program. If there is conditional access information being transmitted to the receivers in the network it may be entered through the Ethernet port (Figure 6M). The routing of this information is from the external computer through the Ethernet Port (Figure 6M) to the I/O router of the massive parallel processor (Figure 6N) and onto the data memory and routing element (Figure 6O). From the DMR the information is routed to processing element, in this example, (2, 19). The processing element will route the CA information to the proper program DMR and place it on the transport stream for transmission. The conditional access information for the network reception devices is not stored on the processor board.
  • This "CA” information may also be entered downstream of this apparatus as an alternative in external multiplexers. This is a common practice in many networks.
  • the "CA” information is in the "CA” section of the 13818-1 transport stream that is set for each program elementary stream for video, audio and data service.
  • Figure 17 depicts an example of how the second programmable video input for the apparatus may be routed.
  • the analog or digital input signal flow is from the input port ( Figure 7A) to either the analog to digital converter (Figure 7B) or to the DDR memory ( Figure 7C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router ( Figure 7D) of the processor. The signal is then routed to the data memory and routing element, DMR, ( Figure 7E) of the processor. From the DMR, the signal is routed to processing elements (3,0) through (3, 19) plus (4,0) through (4, 19), plus (7,0) through (7,9) and (8,0) through (8,9).
  • the input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
  • When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component, located on the board and identified as Figure 1B. These components are readily available, such as the Analog Devices AD9203. This device is commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
  • a DDR SDRAM (double data rate synchronous dynamic random access memory) component, such as the EM44CM1688LBA (Figure 1C), is commercially available from EOREX of Chubei, Taiwan.
  • the input signal is then clocked into the massive parallel processor's input/output data port.
  • the signal is routed to the associated DMR on the processor.
  • a preprogrammed software application instructs the massive parallel processor to complete one or more of the following functions, as required: a. separation into video, audio or data elements by address;
  • i. variable length coding;
  • j. binary arithmetic coding.
  • the ISO/IEC 13818-1, -2 and -3 also cover the clock synchronization for frame recovery by the receiving devices.
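The clock synchronization mentioned above rests on the 13818-1 program clock reference (PCR): a 33-bit base counting a 90 kHz clock plus a 9-bit extension counting a 27 MHz clock. The arithmetic can be sketched directly from the specification:

```python
# PCR arithmetic per ISO/IEC 13818-1: PCR(i) = PCR_base(i) * 300 + PCR_ext(i),
# expressed in ticks of the 27 MHz system clock.
def pcr_value(base: int, ext: int) -> int:
    assert 0 <= base < 2**33 and 0 <= ext < 300
    return base * 300 + ext

def pcr_seconds(base: int, ext: int) -> float:
    return pcr_value(base, ext) / 27_000_000

print(pcr_value(90_000, 0))     # one second of 90 kHz base ticks -> 27000000
print(pcr_seconds(90_000, 0))   # -> 1.0
```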
  • Figure 18 depicts a programmable audio input port represented as input port 8 (Figure 8A).
  • the input port is user-configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to eight, for systems such as Dolby E.
  • the input audio signal flows from the input port(s) ( Figure 8A) to either the analog to digital converter ( Figure 8B) or directly into the input/output port of the parallel processor ( Figure 8D) if it is already formatted as a digital signal.
  • if the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port.
  • in the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
  • the signal is then routed to the DMR on the processor.
  • the DMR routes the signal to processing elements (5, 10) through (5, 13).
  • the processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.)
  • the processing elements support audio systems including Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, pulse coded modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, -2 and -3 documents.
  • the massive parallel processor's processing elements will provide the synchronized video to the proper output ports as defined by the programmer and/or end user.
  • Figure 19 depicts the first of three Asynchronous Serial Interface "ASI" output ports.
  • Figure 20 depicts the second of three Asynchronous Serial Interface "ASI" output ports.
  • the port is identical in function to the port (Figure 9C) in Figure 19.
  • Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications.
  • the processing element (5, 15) provides a multiplex of video, audio and data services in compliance with the previous 13818-1 and H.264 formatted specifications. This output port is void of extraneous information found on the input Ethernet ports.
  • the signal is routed from a processing element (5, 15) to the data memory and router (Figure 10A) onto the parallel processor input/output (Figure 10B) and then to the ASI output port (Figure 10C).
  • Figure 21 depicts the third of three Asynchronous Serial Interface "ASI" output ports.
  • the port is identical in function to the ports (Figures 9C and 10C) in Figures 19 and 20.
  • Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications.
  • the processing element (5, 16) provides a multiplex of video, audio and data services in compliance with the previous 13818-1 and H.264 formatted specifications.
  • This output port is void of extraneous information found on the input Ethernet ports.
  • the signal is routed from a processing element (5, 16) to the data memory and router (Figure 11A) onto the parallel processor input/output (Figure 11B) and then to the ASI output port ( Figure 11C).
  • FIG 22 depicts the first of two Ethernet outputs.
  • the Ethernet output ports can be user configured to be either formatted as IPv4 or IPv6.
  • the ports are configured to carry specific video, audio and data services information formatted as MPEG over IP.
  • the video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5, 17) and (5, 18).
  • the information from processing elements (5, 17) and (5, 18) passes to the data memory and routing element (Figure 12A), then onto the parallel processor input/output port element (Figure 12B), and is then routed to the output Ethernet Port (Figure 12C).
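"MPEG over IP" on such Ethernet output ports is commonly packed as seven 188-byte transport packets per UDP payload (1316 bytes), which fits a standard 1500-byte Ethernet MTU. The packing choice is an assumption for illustration; the patent does not fix a payload size:

```python
# Sketch of grouping TS packets into UDP-sized payloads for MPEG over IP.
TS_PACKET = 188
PACKETS_PER_DATAGRAM = 7   # 7 * 188 = 1316 bytes per datagram

def datagrams(ts_packets: list[bytes]):
    """Yield UDP payloads, each holding up to seven TS packets."""
    for i in range(0, len(ts_packets), PACKETS_PER_DATAGRAM):
        yield b"".join(ts_packets[i:i + PACKETS_PER_DATAGRAM])

stream = [bytes([0x47]) + bytes(187) for _ in range(21)]  # 21 dummy packets
payloads = list(datagrams(stream))
print(len(payloads), len(payloads[0]))   # -> 3 1316
```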
  • FIG 23 depicts the second of two Ethernet outputs.
  • the Ethernet output ports can be user configured to be either formatted as IPv4 or IPv6.
  • the ports are configured to carry specific video, audio and data services information formatted as MPEG over IP.
  • video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5, 19) and (6, 19).
  • Figure 24 depicts a video and audio decoder. This is designed as a confidence decoder that provides decompression of video and audio prior to transmission.
  • the decoder is designed to take the compressed video and audio from one set of processing elements and route the signal to a second set of processing elements. Depending upon the data rate and resolution of the video and audio streams additional or fewer processing elements can be assigned as required.
  • the decoder routes the signal from an assigned series of processing elements to the data memory and routing element ( Figure 14A). From the DMR the signal is then routed to the parallel processor input/output router ( Figure 14B) and out to a serial decoder output ( Figure 14C).
  • the serial decoder output ( Figure 14C) is formatted by processing elements to either the SMPTE 259M or SMPTE 292M specification for viewing on a monitor.
  • Figure 25 depicts a high bit rate storage capability on the processing board.
  • the function supports a feature that allows a capture of high bit rate video/audio that may be accessed at a later time.
  • the processing board provides the ability to concurrently process video at two resolutions and two different data rates. The user has the capability to store either of the video/audio streams on the high data rate storage.
  • the compressed signal is routed from a set of selected processing elements to processing elements (8, 12) and (8, 13). From these processing elements the information is routed to a data memory and router (Figure 15A) to the parallel processor input/output router (Figure 15B) and then routed to the storage device (Figure 15C).
  • Figure 26 depicts the monitoring and/or processing of an external ASI transport stream compliant to ISO/IEC 13818-1 standards.
  • the external stream enters through the external input (Figure 16C) and is routed through the parallel processor input/output router (Figure 16B).
  • the stream is then routed to a predefined set of processing elements.
  • the external stream may be:
  • Figure 27 depicts the instruction and boot function of the parallel processing component. This Figure has three components: (1) the Electrically Erasable Programmable Read-Only Memory, EEPROM (Figure 17A); (2) the Boot Control SPI (Figure 17B); and (3) the Serial Bus Controller (Figure 17C).
  • the EEPROM is a separate component on the processor board and it works in conjunction with the Boot Control and Serial Bus Controller.
  • Figure 28 depicts the RS-232 closed captioning input.
  • the source is an external component with its input to an RS-232 connector (Figure 18A). This is an analog signal and must be converted to digital through a converter (Figure 18B).
  • the signal is then routed to a data memory and routing element ( Figure 18C).
  • the signal is then routed to the data memory and router element ( Figure 18D) and then routed to a predefined processing element.
  • the processed signal is timed with the video frames and embedded within the transport stream.
  • Figure 29 depicts the analog composite synchronization signal. This signal is used for frame synchronization in studio applications.
  • the signal is applied to the input port ( Figure 19A).
  • the signal is then routed to an analog to digital converter ( Figure 19B).
  • From the analog to digital converter the signal is routed to the parallel processor input/output element (Figure 19C) and then to the data memory and router element (Figure 19D). From the data memory and router the signal is routed to a predefined processing element.
  • Figure 30 depicts the utilization of processing elements, in this example elements (9, 10) through (9, 19), for electronic image stabilization.
  • the video will be monitored within processing elements, in this example (0,0) through (0,19) and (1,0) through (1,19), for the first video input (figure 1A) for horizontal and vertical movement. If the movement exceeds pre-defined parameters, the video will be pre-processed, in this example, in processing elements (9,10) through (9,19) to provide electronic stabilization prior to applying a compression algorithm.
  • This processing can apply to two video compression processes or standards if the images being processed are from one image source for figures 1A and 7A.
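The monitoring step above can be sketched as a global-motion estimate between consecutive frames. The algorithm below (column-sum profiles matched by minimum absolute difference) is purely illustrative; the patent does not specify the stabilization method:

```python
# Illustrative global horizontal-shift estimate between two frames.
def column_profile(frame):
    """Sum each column of a frame given as a list of rows."""
    return [sum(col) for col in zip(*frame)]

def estimate_shift(prev, curr, max_shift=4):
    """Best horizontal shift (in pixels) aligning curr to prev."""
    p, c = column_profile(prev), column_profile(curr)
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(p[i], c[i + s]) for i in range(len(p)) if 0 <= i + s < len(c)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best   # if abs(shift) exceeds a pre-defined parameter, compensate

prev = [[1, 2, 3, 4, 5, 6, 7, 8]] * 4          # synthetic 4x8 frame
curr = [[1, 1, 1, 2, 3, 4, 5, 6]] * 4          # content shifted right by 2
print(estimate_shift(prev, curr))              # -> 2
```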
  • In FIGS. 31-58, an integrated concurrent multi-standard video/audio decoder and software applications processor is shown and described.
  • the following descriptions provide examples of applications being assigned to specific processing elements within a massive parallel processor.
  • the parallel processor design allows dynamic, cycle-by-cycle, real time programming. In the actual implementation the processing elements may be shared among multiple mathematical processes and functional applications.
  • Figure 31 depicts the processing board with the parallel processor.
  • the processor has multiple parallel processing elements.
  • This Figure addresses the ability to start the processor through a boot process. This process can originate from either stored code on the processor or through external devices such as an electrically erasable programmable read-only memory (EEPROM), as this Figure depicts. It is also capable of receiving the instructions from other devices such as RISC controllers.
  • the EEPROM (Figure 31A) provides the boot information through the Boot Controller SPI interface internal to the processor (figure 31B) to the actual serial bus controller (figure 31C).
  • Figure 32 depicts the first RF input information process.
  • the RF input (figure 32 A) to the board is routed to a demodulator (figure 32B) which removes the signal from a carrier wave.
  • the digital output from the demodulator is routed to the I/O router (figure 32C) of the parallel processor.
  • the I/O router then provides the data to the data memory and routing device, DMR (figure 32D).
  • DMR then routes the information to the processing element, in this example processing element (0,0).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 33 depicts the second RF input information process.
  • the RF input (figure 33A) to the board is routed to a demodulator (figure 33B) which removes the signal from a carrier wave.
  • the digital output from the demodulator is routed to the I/O router (figure 33C) of the parallel processor.
  • the I/O router then provides the data to the data memory and routing device, DMR (figure 33D).
  • DMR then routes the information to the processing element, in this example processing element (0, 1).
  • the processing element parses the packets and routes them to the appropriate processing elements through the DMR structure for further processing.
  • Figure 34 depicts the first Asynchronous Serial Input, ASI (figure 34A).
  • This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the ASI input to the I/O router (figure 34B) and then to the data memory and router (figure 34C).
  • the information is routed to the processing element (0,2).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 35 depicts the second Asynchronous Serial Input, ASI (figure 35A).
  • This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the ASI input to the I/O router (figure 35B) and then to the data memory and router (figure 35C). In this example the information is routed to processing element (0,3).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 36 depicts the first Ethernet input (figure 36A).
  • This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the I/O router (figure 36B) and then to the data memory and router (figure 36C).
  • the information is routed to the processing elements (0,4) and (0,5).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 37 depicts the second Ethernet input (figure 37A).
  • This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed to the I/O router (figure 37B) and then to the data memory and router (figure 37C).
  • the information is routed to the processing elements (0,6) and (0,7).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 38 depicts the mobile communications RF input port (figure 38A). This input is demodulated in (figure 38B) and provides information packets to the input/output router (figure 38C). The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed from the input/output router (figure 38C) and then to the data memory and router (figure 38D). In this example the information is routed to the processing elements (0,8) and (0,9). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 39 depicts the mobile communications RF reference level input (figure 39A).
  • This input is an analog voltage and is applied to an analog to digital converter (figure 39B).
  • the analog to digital converter provides a digitally sampled reference level to the I/O router, figure 39C.
  • the I/O router provides the signal to the data memory and router element (figure 39D).
  • the DMR information is routed to the processing element (1,0).
  • the processing element provides information to the decoder processing element section for scaling of the video dependent upon the signal strength at any given moment.
  • Figure 40 depicts the meta-data tagging input and output application of the apparatus.
  • the primary function is to allow the input of data to specific frames of video for future reference.
  • the decoder may supply the decoding of the universal time code and geo- positioning system data if it is present in the stream provided to the processing elements. This information can be enhanced with meta-data if required.
  • this application also provides an interface for the object recognition application software.
  • the object recognition application software can provide access for scene change detection, facial recognition and other pre-defined object recognition.
  • the information is routed from the application (figure 40A) to the Ethernet input (figure 40E) and onto the I/O router (figure 40G). From the input/output router the information can be routed either to the DDR memory (figure 40F) for reference information, or to the data memory and router element (figure 40H).
  • FIG 41 depicts the ETR 290 software application.
  • This application allows the user to monitor the content and bandwidth of the program elements within either the input or output streams of this parallel processor.
  • This program will appear as a web service based application.
  • the application is started in (figure 40B).
  • the information in this example is routed from processing element (1 ,2) to the data memory and router element (figure 40H) through the Input/output port (figure 40G) and back through the Ethernet Port (figure 40E) to the end user.
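One representative ETR 290 (ETSI TR 101 290) first-priority check that such a monitor performs is the continuity test: the 4-bit continuity_counter of each PID must increment modulo 16. A simplified sketch (real checks also honor the discontinuity_indicator and duplicate-packet rules):

```python
# Simplified ETR 290 continuity_count_error check.
def continuity_errors(packets):
    """packets: iterable of (pid, continuity_counter) tuples."""
    last = {}
    errors = 0
    for pid, cc in packets:
        if pid in last and cc != (last[pid] + 1) % 16:
            errors += 1
        last[pid] = cc
    return errors

# counter 1 is skipped after the 15 -> 0 wrap, so one error is flagged
stream = [(0x101, 14), (0x101, 15), (0x101, 0), (0x101, 2)]
print(continuity_errors(stream))   # -> 1
```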
  • Figure 42 depicts the command channel information set-up.
  • the command channel allows the configuration of the decoder.
  • This function is a web based interface starting with the information request in (figure 40C).
  • the information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G).
  • the input/output interface information is routed to the processing element (1 ,3).
  • the information is processed and controls such functions as:
  • Figure 43 depicts the command channel information set-up.
  • the command channel allows or denies user access to the apparatus and to separate operational levels of the apparatus.
  • This function is a web based interface starting with the application request in (figure 40D).
  • the information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G).
  • the input/output interface information is routed to the processing element (1 ,4).
  • the information is processed and controls such functions as:
  • Figure 44 depicts the first Asynchronous Serial Output, ASI (figure 41A).
  • This output carries video and audio in a compressed format on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets may carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from processing element (1,5) to the ASI data memory router element (figure 41C). From the DMR the information is routed to the input/output router (figure 41B) and then to the ASI output port (figure 41A).
  • Figure 45 depicts the second Asynchronous Serial Output, ASI (figure 42A).
  • This output carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the video and audio are in a compressed format.
  • the information packets may carry multiple standards for video and audio plus other related and non-related data packets.
  • the information is routed from processing element (1,6) to the ASI data memory router element (figure 42C). From the DMR the information is routed to the input/output router (figure 42B) and then to the ASI output port (figure 42A).
  • Figure 46 depicts the third Asynchronous Serial Output, ASI (figure 43A).
  • This output carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the video and audio are in a compressed format.
  • the information packets may carry multiple standards for video and audio plus other related and non-related data packets.
  • the information is routed from processing element (1,7) to the ASI data memory router element (figure 43C). From the DMR the information is routed to the input/output router (figure 43B) and then to the ASI output port (figure 43A).
  • the additional ASI output allows an output to a dedicated storage capability along with allowing redundant ASI outputs in Figures 44 and 45 to provide system level fail safe capability.
  • Figure 47 depicts the first Ethernet Output (figure 44A).
  • This output port carries information on an Internet Protocol version 4 or 6.
  • the user may select the format.
  • the information may be formatted in MPEG over IP configurations.
  • the information packets may carry multiple standards for video and audio in addition to various other data packets.
  • the information is routed from processing elements (1,8) and (1,9) to the data memory router element (figure 44C). From the DMR the information is routed to the I/O router (figure 44B) and then to the Ethernet output port (figure 44A).
  • Figure 48 depicts the second Ethernet Output (figure 45A).
  • This output carries information on an Internet Protocol version 4 or 6.
  • the user may select the format.
  • the information may be formatted in MPEG over IP configurations.
  • the information packets may carry multiple standards for video and audio in addition to various other data packets.
  • the information is routed from processing elements (2,0) and (2,1) to the data memory router element (figure 45C). From the DMR the information is routed to the I/O (figure 45B) and then to the Ethernet output port (figure 45A).
  • Figure 49 depicts the first of two SMPTE 292M outputs. This is decoded, non- compressed video, non-compressed audio and data information.
  • the information is formatted in processing elements (2,2), (2,3) and (2,4). After formatting, the information is then routed to the data memory routing element (figure 46C). From the DMR the information is passed to the input/output router (figure 46B) and then supplied to the SMPTE-292M output element (figure 46A).
  • This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
  • Figure 50 depicts the second of two SMPTE 292M outputs. This is decoded, non- compressed video, non-compressed audio and data information.
  • the information is formatted in processing elements (2,5), (2,6) and (2,7). After formatting, the information is then routed to the data memory routing element (figure 47C). From the DMR the information is passed to the I/O router (figure 47B) and then supplied to the SMPTE-292M output element (figure 47A).
  • This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
  • Figure 51 depicts the first of two outputs from the parallel processor to be formatted to a digital to analog conversion for display purposes.
  • the output could also be R, G, B, H and V or other format.
  • This information is then passed to the data memory routing element (figure 48D).
  • the information is then passed to the input/output router (figure 48C) and then to the digital to analog converter (figure 48B).
  • the analog outputs are then provided to the component elements (figure 48A) for display processing.
  • Figure 52 depicts the second of two outputs from the parallel processor to be formatted to a digital to analog conversion for display purposes.
  • the output could also be R, G, B, H and V or other format.
  • This information is then passed to the data memory routing element (figure 49D).
  • the information is then passed to the input/output router (figure 49C) and then to the digital to analog converter (figure 49B).
  • the analog outputs are then provided to the component elements (figure 49A) for display processing.
  • FIG 53 depicts the first of two output processed audio routes.
  • the audio can be one of multiple standards including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding.
  • the number of channels can be up to eight per output port configuration.
  • processing elements (3,4) and (3,5) format the information for the data memory router (figure 50D). From the DMR the information is routed to the input/output router (figure 50C). From the input/output router the information is provided to a digital to analog converter (figure 50B). The information is then provided to the audio output channel element (figure 50A).
  • FIG. 54 depicts the second of two output processed audio routes.
  • the audio can be one of multiple standards including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding.
  • the number of channels can be up to eight per output port configuration.
  • processing elements (3,6) and (3,7) format the information for the data memory router (figure 51D). From the DMR the information is routed to the input/output router (figure 51C). From the input/output router the information is provided to a digital to analog converter (figure 51B). The information is then provided to the audio output channel element (figure 51A). The information may bypass the digital to analog converter and be routed directly to the output element (figure 51A) if an external digital audio decoder is utilized.
  • FIG. 55 depicts the Program Specific Information Protocol (PSIP) information file.
  • This information, if included in the received stream, is processed by processing element (3,8).
  • the information is then routed to the data memory router element (figure 52D) and then passed to the data memory routing element (figure 40H) supporting the Ethernet interface.
  • the information is then passed to the input/output router (figure 40G) and then routed to the Ethernet interface (figure 40E) and onto the Ethernet IP stream.
  • Figure 56 depicts the processing of the first of two compressed video and associated audio elementary streams by processing elements (4,0 through 4,9 and 5,0 through 5,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to either the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes memory capabilities of the additional memory located on the board (figure 53C).
  • Figure 57 depicts the processing of the second of two compressed video and associated audio elementary streams by processing elements (6,0 through 6,9 and 7,0 through 7,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to either the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes memory capabilities of the additional memory located on the board (figure 54C).
  • Figure 58 depicts the ability to direct the video decoder to process the video in a scaled format.
  • The scaled format is described in Annex G of the H.264 specification. Processing elements (8,0 through 8,9) and (9,0 through 9,5) are utilized for this process.
  • The video can be scaled in conjunction with the signal level (figure 39A) of a mobile application (figure 38A) input.
  • The decoder and encoder functions described can be combined on a single board, using a unified parallel processor, to create a new transcoding platform: a concurrent multi-standard decoder decodes a signal, and a concurrent multi-standard encoder re-encodes the signal to a new format.
  • In a concurrent multi-standard transcoder architecture, the video, audio and data channels are received in one standard and formatted in an alternative standard.
  • The apparatus described can concurrently decode multiple standards and concurrently encode the signals in multiple alternative standards.
  • The transcoder apparatus receives the signal from an external source as described with respect to the decoder above.
  • The decoded information is routed on a common internal bus structure of the parallel processors to the encoding processing elements on the unified parallel processor for video, audio and data processing.
  • The architecture allows system data to be replaced with alternative system data.
  • The alternative system data can include conditional access information, program specific information protocol ("PSIP") data, separate system information, and alternatively formatted closed captioning data.
  • PSIP — program specific information protocol
  • The command and control of the transcoder additionally allows processing of the streams incoming to the apparatus.
  • The incoming stream processing may include dropping unwanted packets or program services.
  • The transcoder platform architecture additionally allows the insertion of new video, audio and data information for processing by the encoding function.
  • The decoder functionality and the encoder functionality may be combined, using a common internal bus structure of the processor, to provide a transcoder.
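The scaled-decoding capability described above (H.264 Annex G scalable coding, scaled in conjunction with a mobile input's signal level) might be modeled as selecting a scalable layer from a measured signal level. The layer names, resolutions, and thresholds below are illustrative assumptions, not values taken from this disclosure:

```python
from dataclasses import dataclass

@dataclass
class ScalableLayer:
    name: str
    width: int
    height: int
    min_signal_db: float  # weakest signal at which this layer is usable

# Hypothetical layer set: a base layer plus two enhancement layers,
# in the spirit of H.264 Annex G scalable video coding.
LAYERS = [
    ScalableLayer("base",      320,  180, -90.0),
    ScalableLayer("enhance-1", 640,  360, -80.0),
    ScalableLayer("enhance-2", 1280, 720, -70.0),
]

def select_layer(signal_db: float) -> ScalableLayer:
    """Pick the highest layer the measured mobile signal level supports."""
    usable = [layer for layer in LAYERS if signal_db >= layer.min_signal_db]
    return usable[-1] if usable else LAYERS[0]
```

A weak signal falls back to the base layer; a strong signal enables the full-resolution enhancement layer.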
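The incoming-stream processing described above (dropping unwanted packets or program services, and extracting PSIP tables carried in the stream) operates on MPEG-2 transport stream packets. A rough software sketch of PID-based filtering — in the apparatus itself this runs on the parallel processing elements, and the PID values here are only illustrative:

```python
# MPEG-2 transport stream packets are 188 bytes; the 13-bit PID lives
# in the low 5 bits of byte 1 and all of byte 2 of the header.
TS_PACKET_SIZE = 188
PSIP_BASE_PID = 0x1FFB  # ATSC base PID carrying PSIP tables

def packet_pid(packet: bytes) -> int:
    """Return the 13-bit PID from a transport stream packet header."""
    return ((packet[1] & 0x1F) << 8) | packet[2]

def filter_stream(stream: bytes, drop_pids: set) -> bytes:
    """Drop packets whose PID names an unwanted program service."""
    out = bytearray()
    for i in range(0, len(stream) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = stream[i:i + TS_PACKET_SIZE]
        # 0x47 is the mandatory sync byte; skip anything misaligned.
        if pkt[0] == 0x47 and packet_pid(pkt) not in drop_pids:
            out += pkt
    return bytes(out)
```

The same PID test, inverted, would select the PSIP packets for forwarding onto the Ethernet IP stream.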
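The transcoder data path in the bullets above — decode in one standard, optionally replace system data, re-encode in another standard, with the stages joined by the processors' common internal bus — can be sketched with stub codec callables. The real codecs run on the decoding and encoding processing elements; every name below is an illustrative stand-in:

```python
from queue import Queue
from typing import Callable, Optional

def transcode(units, decode: Callable, encode: Callable,
              replace_system_data: Optional[Callable] = None):
    """Decode each input unit, optionally swap system data (e.g. PSIP or
    closed captioning), then re-encode in the target standard."""
    bus = Queue()  # stands in for the common internal bus / DMR fabric
    for unit in units:
        bus.put(decode(unit))       # decoding processing elements
    out = []
    while not bus.empty():
        frame = bus.get()
        if replace_system_data is not None:
            frame = replace_system_data(frame)  # alternative system data
        out.append(encode(frame))   # encoding processing elements
    return out
```

With real decoder and encoder stages in place of the stubs, the same shape supports concurrent multi-standard operation by running one such pipeline per stream.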

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Discrete Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Television Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A system for encoding signals comprises a parallel processor; at least one input coupled to the parallel processor and configured to receive a first signal comprising at least a first video signal and a first audio signal; and at least one output coupled to the parallel processor. The parallel processor is configured to concurrently encode at least the first video signal and the first audio signal using two video standards and/or two audio standards and/or two data rates, and outputs a second signal comprising at least a second video signal and a second audio signal.
PCT/US2011/022624 2010-01-26 2011-01-26 Codeur, decodeur et transcodeur multi-standard simultanes integres WO2011094346A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29844510P 2010-01-26 2010-01-26
US61/298,445 2010-01-26

Publications (1)

Publication Number Publication Date
WO2011094346A1 true WO2011094346A1 (fr) 2011-08-04

Family

ID=44319752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/022624 WO2011094346A1 (fr) 2010-01-26 2011-01-26 Codeur, decodeur et transcodeur multi-standard simultanes integres

Country Status (1)

Country Link
WO (1) WO2011094346A1 (fr)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5629988A (en) * 1993-06-04 1997-05-13 David Sarnoff Research Center, Inc. System and method for electronic image stabilization
WO1999024903A1 (fr) * 1997-11-07 1999-05-20 Bops Incorporated Procedes et appareils pour effectuer des operations mimd synchrones efficaces avec transmission ivliw entre processeurs paralleles
US20020131763A1 (en) * 2000-04-05 2002-09-19 David Morgan William Amos Video processing and/or recording
US20030108105A1 (en) * 1999-04-06 2003-06-12 Amir Morad System and method for video and audio encoding on a single chip
US6674741B1 (en) * 1996-05-20 2004-01-06 Nokia Telecommunications Oy High speed data transmission in mobile communication networks
US6792441B2 (en) * 2000-03-10 2004-09-14 Jaber Associates Llc Parallel multiprocessing for the fast fourier transform with pipeline architecture
US20040218094A1 (en) * 2002-08-14 2004-11-04 Choi Seung Jong Format converting apparatus and method
US7254249B2 (en) * 2001-03-05 2007-08-07 Digimarc Corporation Embedding location data in video
US20070286275A1 (en) * 2004-04-01 2007-12-13 Matsushita Electric Industrial Co., Ltd. Integrated Circuit For Video/Audio Processing

Similar Documents

Publication Publication Date Title
JP6793231B2 (ja) Reception method
US6542518B1 (en) Transport stream generating device and method, and program transmission device
JP4240545B2 (ja) System for digital data format conversion and bitstream generation
US9281011B2 (en) System and methods for encoding live multimedia content with synchronized audio data
US20030156342A1 (en) Audio-video synchronization for digital systems
WO2002061596A1 (fr) Procede et dispositif permettant de fournir des metadonnees synchronisees avec des contenus multimedia
US11895352B2 (en) System and method for operating a transmission network
WO2005091590A1 (fr) Appareils destines a preparer des trains de bits de donnees en vue d'une transmission cryptee
US8850590B2 (en) Systems and methods for using transport stream splicing for programming information security
CN101980541A (zh) Digital television receiver and channel-switching method thereof
CN108307202A (zh) Real-time video transcoding and sending method, apparatus, and user terminal
KR100689474B1 (ko) Transport stream receiving apparatus providing multiple screens and control method thereof
CN109040818B (zh) Audio/video synchronization method for live streaming, storage medium, electronic device, and system
KR20040017830 (ko) System and method for broadcasting independently encoded signals on ATSC channels
US20040190629A1 (en) System and method for broadcast of independently encoded signals on atsc channels
US20070058684A1 (en) Transparent methods for altering the video decoder frame-rate in a fixed-frame-rate audio-video multiplex structure
US20020080399A1 (en) Data processing apparatus, data processing method, data processing program, and computer-readable memory storing codes of data processing program
KR100881371B1 (ko) Real-time video transmitting apparatus, receiving apparatus, transmitting/receiving apparatus, and transmitting/receiving method using wireless multiple access
WO2011094346A1 (fr) Integrated concurrent multi-standard encoder, decoder and transcoder
CN107210041B (zh) 发送装置、发送方法、接收装置以及接收方法
WO2013017387A1 (fr) Methods for compressing and decompressing moving images
US10700799B2 (en) Method and apparatus for broadcast signal transmission
US20040161032A1 (en) System and method for video and audio encoding on a single chip
KR102001067B1 (ko) Multiplexing method for broadcast signal transmission and apparatus therefor
JP7007293B2 (ja) Transmission apparatus for wireless transmission of MPEG-TS (transport stream) compatible data streams

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11737603

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11737603

Country of ref document: EP

Kind code of ref document: A1