WO2020173183A1 - Parallel processing pipeline considerations for video data with portions designated for special treatment - Google Patents


Info

Publication number
WO2020173183A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
pixels
video signal
designated portion
signal
Prior art date
Application number
PCT/CN2019/125429
Other languages
French (fr)
Inventor
Jiong Huang
Laurence Thompson
Jeffrey STENHOUSE
Zhenxing ZHANG
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. filed Critical Huawei Technologies Co., Ltd.
Publication of WO2020173183A1 publication Critical patent/WO2020173183A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363 Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H04N21/43632 Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wired protocol, e.g. IEEE 1394
    • H04N21/43635 HDMI
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003 Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/006 Details of the interface to the display terminal
    • G09G5/12 Synchronisation between the display unit and other units, e.g. other display units, video-disc players
    • G09G2370/00 Aspects of data communication
    • G09G2370/08 Details of image data interface between the display device controller and the data line driver circuit
    • G09G2370/10 Use of a protocol of communication by packets in interfaces along the display data pipeline
    • G09G2370/12 Use of DVI or HDMI protocol in interfaces along the display data pipeline
    • G09G2370/20 Details of the management of multiple sources of image data

Definitions

  • the disclosure generally relates to the processing of video data.
  • a typical video system will include a video source device, such as a set top box or DVD player, and a video sink device, such as a television or other display.
  • the video data will usually be transferred from the video source to the video sink over a cable, such as an HDMI (High-Definition Multimedia Interface) cable.
  • a video source or sink may implement parallel processing techniques.
  • Parallel processing works by dividing a stream of video pixel data into N parallel processing paths, where N is a number > 1 selected by the designer based on specific design considerations.
  • a primary design consideration is the rate at which digital processing logic in the source is sequenced or "clocked."
  • Another consideration is whether the choice of N allows the pixel data to be processed in the same way in all the processing paths. This may simplify the design, compared to the case where different processing operations are performed in the different parallel processing paths.
  • a video source includes a signal provider and a source transmitter.
  • the signal provider is configured to provide a digital video signal including a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a designated portion, the designated portion beginning at a specified pixel location and of a specified number of pixels in length.
  • the source transmitter includes a serial to parallel converter, a parallel processor and a converter.
  • the serial to parallel converter is configured to: receive the video signal from the signal provider; perform a grouping of the pixels of the video signal into N pixel segments of adjacent pixels, where N is an integer greater than one, to convert the video signal into a parallel format; augment the designated portion by one or more pixels of the video signal adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels when converted to parallel format; and provide the video signal in the parallel format.
  • the parallel processor is configured to process the video signal in parallel format, including encoding portions of the first blanking interval of the processed video signal other than the augmented designated portion.
  • the converter is configured to provide the compressed, processed video signal to a video sink device.
  • the one or more pixels of the video signal adjacent to the designated portion includes pixels both following and preceding the designated portion.
  • each of the frames include an active video portion and the video signal provided by the signal provider includes the active video portions in a compressed form.
  • the video source further includes a video decompressor configured to decompress the video portion of the provided video signal and provide a resultant signal to the serial to parallel converter.
  • the active video portions of the video signal provided by the signal provider are compressed in a Moving Picture Experts Group (MPEG) format.
  • the serial to parallel converter receives the video signal from the signal provider at a first clock rate and provides the video signal in the parallel format at a clock rate at 1/N the first clock rate.
  • the converter is configured to provide the compressed, processed video signal in an HDMI (High-Definition Multimedia Interface) format.
  • the converter is configured to provide the compressed, processed video signal asynchronously.
  • the designated portion is for copy protection processing.
  • the designated portion is for High-bandwidth Digital Content Protection (HDCP) processing.
  • the designated portion is part of a vertical synchronization period.
  • a video processing circuit includes a serial to parallel converter and a parallel processing circuit.
  • the serial to parallel converter is configured to: receive a video signal having a first clock rate, the video signal including a designated portion of a specified number of contiguous pixels; convert the video signal into N parallel data streams at a clock rate of 1/N the first clock rate, where N is an integer greater than one, by grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and prior to grouping the pixels, augment the designated portion by one or more pixels of the video signal to align an end of the designated portion with the grouping of the pixels.
  • the parallel processing circuit is configured to receive and concurrently process the N parallel data streams at a clock rate of 1/N the first clock rate.
  • a method of processing a video signal includes receiving a video signal that includes a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a corresponding designated portion beginning at a specified pixel location and of a specified number of pixels in length.
  • the method also includes designating a first set of one or more contiguous pixels adjacent to the designated portion of the video signal as being selectively assignable to the designated portion and converting the video signal into a parallel format of N parallel data streams, where N is an integer greater than one.
  • Converting the video signal into a parallel format of N parallel data streams includes: grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and augmenting the designated portion of the corresponding first blanking interval by one or more pixels from the first set of contiguous pixels adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels.
  • Embodiments of the present technology described herein provide improvements to the parallel processing of video data. This includes the grouping of pixels in a stream of video data so that specified portions of the stream are grouped together so that the pixels of a group are uniformly treated in the parallel processing.
  • a designated portion of sections of video frames that would otherwise be compressed when transmitted from a video source to a video sink can be augmented by a transition period of additional following pixels, additional preceding pixels, or both from the stream of data so that pixels from the designated portion are not grouped for parallel processing with pixels that will be compressed for subsequent transmission from the video source to the video sink.
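The augmentation described above can be sketched in a few lines. This is an illustrative Python sketch only, not the patent's implementation; the function names (`align_designated_portion`, `group_pixels`) are invented for illustration.

```python
# Illustrative sketch (names invented, not from the patent) of augmenting a
# designated portion so that both of its ends land on N-pixel group
# boundaries before serial-to-parallel grouping.

def align_designated_portion(start, length, n):
    """Return (new_start, new_length) for the augmented designated portion.

    start  -- first pixel index of the designated portion
    length -- designated length in pixels (e.g. 142 for the HDCP Keep-Out)
    n      -- parallelism factor / group size (e.g. 4)
    """
    new_start = (start // n) * n        # pull the start back to a boundary
    end = start + length                # one past the last designated pixel
    new_end = -(-end // n) * n          # push the end forward (ceiling)
    return new_start, new_end - new_start

def group_pixels(pixels, n):
    """Group a serial pixel stream into N-pixel segments of adjacent pixels."""
    return [pixels[i:i + n] for i in range(0, len(pixels), n)]
```

With the HDCP Keep-Out values used later in this document (start 508, length 142), `align_designated_portion(508, 142, 4)` widens the portion to 144 pixels, so no 4-pixel group straddles an encoding transition.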
  • FIG. 1 illustrates a video system including a video source connected to a video sink with a cable assembly.
  • FIG. 2 is a high level functional diagram of the source/sink interface for an HDMI or other standard.
  • FIG. 3 is a block diagram of an embodiment for the video source transmitter introduced in FIG. 1.
  • FIG. 4 is a schematic illustration of the structure of a frame of video data, illustrating the designated portions of the video blanking period, as specified by the interface standard.
  • FIG. 5 illustrates an example of the placement of a designated portion within the blanking interval.
  • FIG. 6 schematically compares single pipe (non-parallel) processing, 2x parallel processing, and 4x parallel processing.
  • FIG. 7 illustrates the non-alignment of the portion designated for no Repeat Count (RC) encoding with the sub-division of the blanking interval for 4x parallel processing.
  • FIG. 8 illustrates the incorporation of a transition period before and after the designated portion that can be used to augment the designated portion.
  • FIG. 9 is a more generalized illustration to show the use of the transition period for an arbitrary condition when the beginning of the designated portion does not align with the grouping of pixels.
  • FIG. 10 is a high-level flow diagram that is used to summarize methods for handling specified portions of a stream of video data, such as those designated for no RC encoding, when converting video data to a parallel format.
  • the present disclosure will now be described with reference to the figures, which in general relate to the processing and transmission of video signals.
  • to improve transfer efficiency, the part of a video signal not containing active video data, such as the blanking intervals, can be compressed.
  • the blanking intervals, however, include a designated portion that is specified not to be compressed.
  • the blanking intervals of a frame may have a specified range of pixels starting at a specified pixel location and designated as non-compressible.
  • Before transmitting a video signal to a video sink, a video source will perform processing on the video signal; and, in order to more readily handle the high data rate, the video source will often employ parallel processing.
  • For parallel processing of the stream of video data, the data stream is sub-divided into contiguous sets of pixels of a size corresponding to the number of parallel processing pipelines.
  • this sub-division of the data stream may not line up with one or both of the end and beginning of the designated portion of the video data stream, which can complicate the data processing.
  • the video source can augment the designated portion with a transition period of additional adjacent pixels at the end of the designated portion, the beginning of the designated portion, or both.
  • FIG. 1 is used to describe an embodiment for some components of a video system.
  • the video system of FIG. 1 includes a video source 110 that provides a video signal to a video sink 130.
  • Examples of a video sink 130 are a television, monitor, other display device or end use device; alternatively, the video sink may be an amplifier or repeater that would in turn act as a video source for a subsequent element in a video system.
  • a receiver circuit Sink Rx 131 receives the signal from the video source 110, where the Sink Rx 131 can include an equalization circuit and other interface elements for the signal received from the video source 110 over a cable or other connector.
  • the video source 110 provides the video signal to the video sink 130 from a transmitter circuit Source Tx 119.
  • Examples of a video source 110 are a set-top box, a DVD, Blu-ray or other media player, or a video camera.
  • the video source can be any system-level product that provides a digital video signal, which may or may not be compressed, depending on the embodiment.
  • the video will generally be compressed, such as according to an MPEG (Moving Picture Experts Group) standard, but in an embodiment for a digital video camera the provided signal is typically not compressed.
  • the video signal is provided by the signal provider 111.
  • the signal provider 111 would read the media to provide the video data.
  • the video signal is received at a receiver circuit or interface for the signal provider 111.
  • the set-top box might receive a video signal from a cable provider over a coaxial cable, where the video signal is compressed and encoded according to an MPEG (Moving Picture Experts Group) standard, such as MPEG-4, or other compression algorithm.
  • the stream of received video data can be decompressed at the video decompression block 112 to generate a baseband (i.e., uncompressed) digital video/audio signal.
  • the video decompression block 112 need not be included in the video source device 110.
  • the video source 110 can then perform processing on the decompressed stream of video data.
  • the video data may be encrypted in some embodiments, formed into packets, have error correction information added, or have other operations performed upon it.
  • this can include functionality to comply with the requirements of an interface standard, such as HDMI, to transmit the video signal to the sink device 130 over the cable assembly 121 as performed in the Source Tx 119.
  • parallel processing can be used in the Source Tx 119, with the data stream converted into a parallel format as discussed below with respect to FIG. 3.
  • the video signal can be transmitted from the video source 110 to the video sink 130 over a cable assembly 121, of which there are a number of formats such as component video cables, VGA (Video Graphics Array) cables, or HDMI cables, where the HDMI example will be used as the main embodiment for the following discussion.
  • An HDMI cable assembly 121 will be a cable with plugs or connectors 125 and 127 on either end. The plugs 125 and 127 can plug into corresponding sockets or receptacles 123 and 129 to provide video data from the Source Tx 119 to the Sink Rx 131.
  • the video data as received at the video source 110 will have the active video (i.e., the pixels of the image to be provided on a television or other display) compressed, but the video data transmitted over the cable assembly 121 to the video sink 130 can have uncompressed or compressed active video portions.
  • the active video may be DSC (Display Stream Compression) compressed, which is a visually lossless low-latency compression algorithm.
  • FIG. 2 is a high level functional diagram of the source/sink interface for an HDMI or other standard.
  • On the left is the Source Tx 219, the arrows in the center represent the signals that are carried in the cable assembly 221, and on the right is the Sink Rx 231.
  • the video data is transferred over the data lanes, where there can be a number of such lanes to provide high data transfer rates.
  • the shown embodiment has four data lanes, but other embodiments can have more or fewer data lanes.
  • the interface may also be operable in different modes, where less than all of the available data lanes are used in some modes if, for example, the interface is operating at a lower data rate or to provide back-compatibility with earlier versions of a standard.
  • a high data rate four lane mode could use all of the provided data channels, while a three lane mode can be provided for back compatibility to an earlier version of a standard by disabling one of the lanes.
  • the video source on the Source Tx 219 side can configure the link to operate at different bit rates using a fixed rate link.
  • the cable assembly can also have a number of control lines for the exchange of control signals over the source/sink link.
  • FIG. 3 is a block diagram of an embodiment for a Source Tx 319, such as the Source Tx 119 of the video source 110 introduced in FIG. 1.
  • the Source Tx 319 includes a converter 345 configured to convert the output of the preceding parallel processing section 343 to multiple lanes and transmit the signal to drive a cable assembly whose plug would attach at the socket or receptacle 323.
  • the Source Tx 319 receives the baseband digital video/audio signal from the signal provider 111 (after video decompression, if needed, at video decompression block 112) at the serial to parallel converter 341.
  • parallel processing can be used, with the data stream converted into a parallel format in the serial to parallel converter 341 before being provided to the parallel processing block 343, where parallel streams of video data are constructed and encrypted according to the standard of the interface for transfer over the cable assembly from the socket 323.
  • As the video data has been decompressed at video decompression block 112 and will be transferred from the video source 110 to a video sink with the active video portions of its video data either uncompressed or compressed, to further increase the transfer utilization, the portion of the video data other than the active video can be compressed in a process of Repeat Count (RC) encoding, as described further below.
  • the RC encoding, or other encoding applied to the processed video data prior to being transmitted from the Source Tx 319, can be performed as part of the parallel processing in block 343.
  • If the video data is to have compression applied to its active video portions, such as DSC compression, this can also be performed as part of the parallel processing block 343 operations.
  • the various components of the video source 110 illustrated in FIG. 1 and Source Tx 119/319 in FIGs. 1 and 3 can be implemented in hardware, firmware, software, or various combinations of these; and although represented as separate blocks in FIGs. 1 and 3, an actual implementation may combine various elements.
  • some elements, such as the video decompression block 112, can be implemented as an application specific integrated circuit (ASIC), with others, such as the parallel processing block 343 and serial to parallel converter 341 or converter 345, implemented together through software on more general processing circuitry.
  • FIG. 4 is a schematic illustration of the structure of a frame of video data.
  • a video image When a video image is displayed on a television or other display, the image is formed of a series of rows of pixels presented in a sequence of frames.
  • these pixels of active video 401 are represented by the lines with the diagonal hashing. Additionally, a preceding portion of each of these lines of active data pixels and a preceding number of lines are "blanking intervals", which are portions of the frame not typically displayed. These blanking intervals are formed of a number of "blank pixels".
  • "pixel" is sometimes used to refer to only the "active pixels" of the active video portion that is displayed, but as used here, unless further qualified, "pixel" can refer to either an "active pixel" of the active video portion or a "blank pixel" from a blanking interval. (In particular, although more generally applicable, the following discussion is primarily focused on the blank pixels of the blanking intervals.) The origin and much of the terminology related to these blanking intervals is historical, from when televisions used cathode ray tubes that were illuminated by moving beams of electrons very quickly across the screen. Once the beam reached the edge of the screen, the beam was switched off and the deflection circuit voltages (or currents) were returned to the values they had for the other edge of the screen.
  • FIG. 4 depicts a single frame of video, such as is transmitted from the Source Tx 319 over an HDMI interface.
  • a single frame is transmitted typically in 1/60 second.
  • the frame of FIG. 4 shows 2 basic time periods within the frame, corresponding to the active video and blanking intervals.
  • the active period typically uses about 85% of the frame’s content, but FIG. 4 is drawn to focus on the blanking periods and is not drawn in proportion to the frame time.
  • FIG. 4 also illustrates the horizontal synchronization pulse, or Hsync, that separates the horizontal lines, or scan lines, of a frame.
  • the horizontal sync signal is a single short pulse which indicates the start of every line, after which follows the rest of the scan line.
  • the vertical synchronization pulse, or Vsync, is also shown and is used to indicate the beginning of a frame or, in embodiments where a frame is made up of alternating fields, to separate the fields.
  • the vertical sync pulse occurs within the vertical blanking interval.
  • the vertical sync pulse occupies the whole line intervals of a number of lines at the beginning and/or end of a scan when there is no active video.
  • the blanking data of a control period can be compressed through Repeat Count, or RC, encoding or compression.
  • This RC encoding is a type of "run length" encoding that differs from what is commonly meant by "video compression", which usually refers to the compression of the active video part of the video frame, rather than the blanking periods, and examples of which are MPEG or DSC type compression.
  • As RC encoding is a form of compression applied to the blank pixels, in the present discussion it will be referred to primarily as "RC encoding", rather than "RC compression", although the latter terminology may also be used elsewhere. More generally, although the discussion here is presented in the context of RC encoding, it can also be applied to other encodings for video processing operations where it can be useful to treat contiguous groups of pixels (both blank and active pixels) uniformly when grouped for parallel processing operations. To indicate whether portions of the blanking intervals are compressed through RC encoding, and the degree to which they are compressed, in some embodiments a Repeat Count indicator can be used.
  • This encoding can be performed in the parallel processing block 343 of Source Tx 319. This information can be included in the packets when the video data is formed up into packets for transmission over the cable assembly to the video sink, where the RC encoded portions can be expanded back out by repeating an RC compressed portion the specified number of times.
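The Repeat Count idea can be illustrated with a toy run-length sketch. The actual RC encoding format is defined by the HDMI 2.1 Specification and is not reproduced here; this only shows the collapse-and-expand principle described above, with invented function names.

```python
# Toy run-length sketch of the Repeat Count idea (illustrative only; the
# real RC encoding format is defined by the HDMI 2.1 Specification).

def rc_encode(samples):
    """Collapse runs of identical samples into (value, repeat_count) pairs."""
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1            # same value: extend the current run
        else:
            runs.append([s, 1])         # new value: start a new run
    return [(value, count) for value, count in runs]

def rc_decode(runs):
    """Expand (value, repeat_count) pairs back into the original samples."""
    out = []
    for value, count in runs:
        out.extend([value] * count)     # repeat each value its counted times
    return out
```

Unchanged blanking data over many pixel clocks collapses to a single pair with a large repeat count, which is what makes the blanking intervals so compressible.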
  • the blanking intervals can include portions that are designated not to be compressed, which are represented in black (such as 407), and can be used to support auxiliary data transmission.
  • an example of a portion of the blanking interval that is not to be RC compressed arises when High-bandwidth Digital Content Protection, or HDCP, is incorporated into HDMI, where the blanking interval includes a designated portion, or pixel location, beginning at a specified pixel number and of a specified number of pixels in length.
  • a designated portion of the blanking interval that is designated to be uncompressed to support HDCP (data encryption) processing is shown at 403, with adjacent pixels added in from the before and after transition portions 405 that are not designated to be compressed but that may or may not be compressed.
  • the following discussion uses an example of an HDCP 1.4 or 2.2 and HDMI 2.1 embodiment, but the techniques can more generally be applied to other embodiments where the blanking interval or other portion of the video signal is subject to encoding, but includes portions that are not to be encoded.
  • Both HDCP 1.4 and HDCP 2.2 define a Keep-Out period, for a total of 142 pixel clock cycles, at pixel locations from Pixel 508 to Pixel 650, referenced to the Vsync signal's leading edge.
  • the HDMI 2.1 Specification specifies that a video source shall not use RC encoding in the Keep-Out region, regardless of whether or not HDCP is supported.
  • the HDMI 2.1 Specification also specifies that for all other control periods, video sources shall utilize and maximize RC encoding when a particular set of blanking interval data (including Hsync and Vsync) all remain unchanged for 2 or more periods.
  • FIG. 5 illustrates this situation.
  • FIG. 5 illustrates an example of the placement of a designated portion within the blanking interval.
  • the top line is the pixel clock signal, pixel_clk.
  • the Vsync is high during the Vsync period and is otherwise low.
  • the pixel count for the pixel clock signal is aligned to start at Pixel 0, taken as the leading edge of the Vsync signal.
  • the HDCP keep out period corresponds to the designated portion in this example and is illustrated in the third line, beginning at pixel location Pixel 508 (relative to the leading edge of the Vsync signal) when the signal goes high and extending for 142 pixel counts to Pixel 650 when the signal goes low.
  • These pixel locations are specified values, where the keep out period is designated to not be compressed to provide time that may be needed for HDCP related operations on the video source and sink.
  • the bottom line illustrates the RC encoding requirements for the HDMI 2.1 Specification, where the portions other than the HDCP Keep-Out period are to be RC compressed, while the RC encoding is off for the HDCP Keep-Out period.
  • the signal waveforms of FIG. 5 are from a frame of the baseband digital video/audio data after it has undergone video decompression in block 112, but prior to RC encoding in Source Tx 119.
  • parallel processing can be used, which requires the decompressed video data stream to be converted to a parallel format.
  • the structure of the designated period of FIG. 5 can lead to complications when converting the data stream into a parallel format, as can be illustrated with respect to FIG. 6 and FIG. 7.
  • Parallel processing is common practice for high speed digital hardware designs.
  • the rate at which pixel and blanking data is to be processed in an HDMI 2.1 video source may be higher than what some common semiconductor processes can support.
  • the bandwidth requirement is increased to 48 Gbps, which is a 3x increase compared with HDMI 2.0, so that a parallel design would be used if the rate of the digital logic is to be maintained or effectively reduced. Consequently, to be able to process the stream of video data at a sufficient rate, the embodiment of a Source Tx 319 illustrated in FIG. 3 uses parallel processing at block 343.
  • FIG. 6 schematically compares single pipe (i.e., non-parallel) processing, 2x parallel processing, and 4x parallel processing.
  • The top line shows the basic pixel clock used for single pipe processing, below which is a stream of pixels, where 8 clock cycles correspond to 8 pixels of data (e.g., p0 to p7).
  • For 2x parallel processing, the pixel clock signal (clock_div2) is at half the rate of the single pipe pixel clock, but the same data rate is maintained by breaking up the video stream into pairs of pixels, with a set of two pixels being processed in each clock cycle.
  • In 8 clock_div2 cycles, 16 pixels of data (e.g., p0 to p15) can be processed.
  • For 4x parallel processing, the pixel clock signal clock_div4 runs at quarter speed relative to the single pipe, the stream of pixels is broken up into sets of four, and 16 pixels of data (e.g., p0 to p15) can be processed in 4 clock_div4 cycles.
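The regrouping compared in FIG. 6 can be sketched as follows; an illustrative Python sketch in which the pixel labels p0..p15 are just stand-ins for the pixel data.

```python
# Regrouping the same 16-pixel stream for the single pipe, 2x, and 4x
# cases of FIG. 6: one group is consumed per (divided) clock cycle.

def regroup(stream, n):
    """Split a pixel stream into groups of n adjacent pixels."""
    assert len(stream) % n == 0, "stream must fill whole groups"
    return [stream[i:i + n] for i in range(0, len(stream), n)]

pixels = [f"p{i}" for i in range(16)]
single = regroup(pixels, 1)   # 16 cycles of the basic pixel clock
pairs = regroup(pixels, 2)    # 8 cycles of clock_div2, two pixels each
quads = regroup(pixels, 4)    # 4 cycles of clock_div4, four pixels each
```

The data rate is the same in all three cases; only the clock rate and the number of pixels handled per cycle change.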
  • the pixels are grouped into sets of contiguous pixels of a size corresponding to the degree of parallelism (i.e., number of pipelines) used.
  • the HDCP Keep-Out period has a specified duration of 142 pixels for the designated portion that is not to have RC encoding.
  • A 142-cycle period of no encoding does not work well with parallel pipelining, because 142 is a difficult number for parallelism: it is not a multiple of 4, 6, or 8, which are typical numbers used for parallelism. Because of this, the special handling of the 142-cycle period in a parallel design can lead to a complexity increase for HDMI sources.
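The arithmetic behind this observation can be checked directly:

```python
# 142 leaves a nonzero remainder for each typical parallelism factor, so a
# 142-pixel span can never fill a whole number of 4-, 6-, or 8-pixel groups.
remainders = {n: 142 % n for n in (4, 6, 8)}
assert all(r != 0 for r in remainders.values())
```

The remainders (2 pixels for 4x, 4 for 6x, 6 for 8x) are exactly the pixels that end up sharing a group with differently treated neighbors.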
  • FIG. 7 illustrates the non-alignment of the portion designated for no RC encoding with the sub-division of the blanking interval for 4x parallel processing.
  • the top three lines (Pixel Clock, Vsync, HDCP Keep-Out) correspond to top three lines of FIG. 5 and the bottom line (HDMI 2.1 Requirements) corresponds to the bottom line of FIG. 5.
  • the video blanking data is numbered starting from 0 at the leading edge of the Vsync signal.
  • the video blanking data is grouped in 4-data-pack sets for a 4x parallel pipeline, where, in this example, Pixel 0 has phase 0; that is, the grouping is aligned at the leading edge of the Vsync signal so that the first group is for pixels p0, p1, p2, and p3.
  • This grouping continues where, since 508 is divisible by four, pixels p508, p509, p510 and p511 are grouped together. Consequently, the grouping aligns with the RC encoding ON to RC encoding OFF transition.
  • the designated portion ends at pixel p649, so that pixels p648 and p649 are in the RC encoding OFF portion, while the next two pixels, p650 and p651, are in the RC encoding ON portion.
  • the grouping of pixels for 4x parallel processing results in these pixels all being formed into the same 4-data group, so that a single 4-data group of pixels straddles the RC encoding OFF to ON transition. Consequently, within this last 4-data group, there is an encoding OFF to ON change.
  • Disabling this RC encoding for the exact 142 cycles introduces complexity at this boundary, as one part of the 4-data group is to be compressed while the other part is not.
  • the following embodiments introduce a “soft” switching for RC encoding ON/OFF near the boundary of the HDCP Keep-Out area, instead of the “hard” switching described above that can result in implementation complexity.
  • a transition period of several pixel clock cycles can be added to the transition period immediately before the HDCP Keep-Out period, immediately after the HDCP Keep-out period, or both. For example, by being able to selectively add up to 8 immediately preceding pixels and up to 8 immediately following pixels to the designated RC encoding off region, both ends of the designated portion can be aligned in data groupings for up to 8x parallel processing.
  • a number of adjacent pixels (such as up to N) preceding and following the designated portion can be selectively reassigned to it for up to Nx parallel processing.
  • the RC encoding process remains uniform for each data grouping. This process affects only the video source during the conversion of the video data to a parallel format and has no impact on the video sink device.
  • the expansion or augmentation of the designated area can be illustrated with respect to FIG. 8.
  • FIG. 8 illustrates the incorporation of a transition period adjacent to the designated portion that can be used to augment the designated portion.
  • the RC encoding ON/OFF switching happens to be aligned with a 4x group boundary.
  • a transition period of adjacent pixels that can be reassigned to the RC encoding OFF region from after and/or before the RC encoding OFF region, as illustrated in the “Update” line.
  • Source Tx 319 to extend the designated RC encoding OFF region from 142 pixel clock cycles up to 142 + 2*Delta pixel clock cycles.
  • the designated RC encoding OFF region is extended by two pixel clocks so that the transition now falls on Pixel 652, which is a multiple of four.
  • This transition period before and after the designated area will conceptually extend the size of the RC Compression OFF region from 142 periods to a multiple of the number of pipelines.
  • RC encoding remains off for 144 periods.
  • FIG. 9 is a more generalized illustration to show the use of the transition period for an arbitrary condition when the beginning of the designated portion does not align with the grouping of pixels.
  • the video blanking data starting with Pixel 0 aligned with the leading edge of the Vsync signal.
  • the grouping of pixels into sets of four is not aligned with the leading edge of the Vsync signal, but instead is at phase 3, so that the first group includes the three pixels prior to the Vsync leading edge, beginning at Pixel -3 in this example.
  • this off-set is represented by the vertical broken lines, such as shown at clock cycle 1.
  • Pixel 0 is now grouped with Pixels -3, -2, and -1. This grouping continues on through the stream of video blanking data, where some of the grouping boundaries around the RC encoding ON/OFF and RC encoding OFF/ON transitions are also represented by vertical broken lines.
  • the last pixel of the designated portion is Pixel 649, which now has Phase 0 in another 4-pixel group of Pixels 649, 650, 651, and 652.
  • Consequently, at both the RC encoding ON/OFF transition and the RC encoding OFF/ON transition there is a 4-pixel grouping that includes both pixels that are to be RC compressed and pixels that are not (indicated by the stippling).
  • the video source may disable the RC encoding for these 4-pixel groupings with mixed RC encoding status.
  • FIG. 10 is a high-level flow diagram that is used to summarize methods for handling specified portions of a stream of video data, such as those designated for no RC encoding, when converting video data to a parallel format.
  • the flow of FIG. 10 illustrates the process of FIGs. 8 and 9 and is described with respect to the embodiment for a video source 110 as illustrated in FIG. 1 and a Source Tx 319 as illustrated in FIG. 3.
  • the video source 110 receives a video signal from the signal provider 111.
  • the signal can include active video regions and a designated portion requiring special treatment, such as in the blanking interval for the examples above for the incorporation of HDCP processing into HDMI signals.
  • when the received signal is in compressed form, such as the active video being compressed in an MPEG format, the video signal is decompressed by the video decompression block 112 to provide the stream of uncompressed video data.
  • the baseband video signal is then passed on to the logic of the Source Tx 319 to perform processing as specified, in this example, for an HDMI source in preparation for transmission over an HDMI interface.
  • the baseband signal will be organized into N parallel processing paths, with each path clocked at 1/N the rate of the incoming pixel rate.
  • the Source Tx 319 can designate a following set of pixels as selectively assignable to the designated portion at 1005 and can designate a preceding set of pixels as selectively assignable to the designated portion at 1007. Both of 1005 and 1007, as well as the following process of 1008 and its components 1009, 1011, and 1013, can be performed in the serial to parallel conversion block 341.
  • the grouping of the pixels is then performed at 1013.
  • 1005 and 1009, and 1007 and 1011, are shown as separate pairs of steps for purposes of discussion, but in an actual implementation different embodiments can combine these into single operations. Additionally, although these elements of FIG. 10 are shown in a particular order, in some embodiments they can be performed in different orders or even concurrently.
  • parallel processing of the signal in parallel processing block 343 is performed at 1015.
  • the blanking intervals are to be RC compressed
  • the converter 345 can then transmit the video signal with RC compressed blanking intervals over a cable structure from the socket 323 to a video sink.
  • in a transition period before and after the HDMI HDCP Keep-Out or other designated portion, adjacent pixels of a video signal can be reassigned to the designated portion for soft and flexible switching of RC encoding status.
  • a transition period of a number of pixels (such as up to N pixels) can be used.
  • This can reduce the complexity for parallelism up to N pipelines.
  • the updates involved in these techniques can be limited to the video source and have no impact on a video sink.
  • the source transmitter on a source device can process the baseband video signal using the parallel processing structure, in accordance with the rules of the HDMI specification.
  • the source transmitter can determine the portion of the video signal designated as an HDCP-keep-out area, and the portions that are Auxiliary Data periods.
  • the source transmitter can then enable blanking encoding, or RC encoding, for the portions of the blanking interval that are not designated Auxiliary Data periods and are not designated as the HDCP keep-out period.
  • the source transmitter can apply the exception presented above: disable blanking encoding at a point in the pixel stream of some number of pixel periods prior to the start of the HDCP-keep-out area, and which aligns with the parallel processing structure, such that all N pixels within the N processing paths have blanking encoding disabled; and disable blanking encoding until a point in the pixel stream a number of pixel periods after the end of the HDCP-keep-out area, and which aligns with the parallel processing structure, such that all N pixels within the N processing paths have blanking encoding disabled.
  • a connection may be a direct connection or an indirect connection (e.g., via one or more other parts).
  • when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements.
  • When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element.
  • Two devices are “in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them.
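The soft-switching alignment described in the points above can be sketched in a few lines of Python. This is an illustrative model, not code from any specification: the function name and the `phase` convention (the pixel index, modulo N, at which a group begins) are assumptions of the sketch. It grows the designated RC-encoding-OFF region outward so that both of its boundaries fall on N-pixel group boundaries.

```python
def soften_keepout(start, length, n, phase=0):
    """Extend a designated no-encoding region so both of its boundaries
    fall on N-pixel group boundaries.

    Pixels are grouped so that a group begins at every pixel p with
    (p - phase) % n == 0.  The region [start, start + length - 1] is
    grown outward (never shrunk) to cover whole groups only.
    Returns the (new_start, new_length) of the augmented region.
    """
    first = start - (start - phase) % n                      # group start at/before region
    last_pixel = start + length - 1
    last = last_pixel + (n - 1 - (last_pixel - phase) % n)   # group end at/after region
    return first, last - first + 1

# HDMI example from the description: keep-out pixels 508..649 (142 pixels)
print(soften_keepout(508, 142, 4))     # -> (508, 144): extended by 2, transition at 652
print(soften_keepout(508, 142, 4, 1))  # -> (505, 148): grouping offset as in FIG. 9
```

With the Vsync-aligned grouping of FIG. 8 the region only needs to grow at its end (142 becomes 144 cycles), while the offset grouping of FIG. 9 requires growth at both ends; in both cases the new length is a multiple of the number of pipelines.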

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

As video goes to higher levels of definition, the rate at which video data needs to be processed increases. To be able to process video data more rapidly, a video source can use parallel processing, where pixels are grouped into sets of N pixels for N processing pipelines. The video data may include designated sections that require special treatment. For example, when supporting High-bandwidth Digital Content Protection (HDCP) in an HDMI (High-Definition Multimedia Interface) signal, the blanking intervals of a frame of video data may have a specified range of pixels designated as non-compressible for subsequent Repeat Count (RC) encoding. To avoid a set of pixels grouped together for parallel processing including pixels both within and outside of the designated portion, the video source can augment the designated portion with a number of preceding and/or following adjacent pixels.

Description

PARALLEL PROCESSING PIPELINE CONSIDERATIONS FOR VIDEO DATA
WITH PORTIONS DESIGNATED FOR SPECIAL TREATMENT
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims priority to U.S. provisional patent application Serial No. 62/811,034, filed on February 27, 2019 and entitled “Parallel Processing Pipeline Considerations For Video Data With Portions Designated For Special Treatment”, which is incorporated herein by reference as if reproduced in its entirety.
FIELD
[0002] The disclosure generally relates to the processing of video data.
BACKGROUND
[0003] A typical video system will include a video source device, such as a set top box or DVD player, and a video sink device, such as a television or other display. The video data will usually be transferred from the video source to the video sink over a cable, such as an HDMI (High-Definition Multimedia Interface) cable. As television signals move to higher and higher levels of definition, the amount of video data and the rate of video data transfer between the source and the sink increase, as does the rate at which any data processing on the video source or sink is performed. To handle the increased rate of data processing, a video source or sink may implement parallel processing techniques. Parallel processing works by dividing a stream of video pixel data into N parallel processing paths, where N is a number > 1 and is selected by the designer based on specific design considerations. A primary design consideration is the rate at which digital processing logic in the source is sequenced or “clocked.” Another consideration is whether the choice of N allows the pixel data to be processed in the same way in all the processing paths. This may simplify the design, compared to the case where different processing operations are performed in the different parallel processing paths.
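As a rough illustration of the parallel-path division described above, the following Python sketch groups a serial pixel stream into N-wide words, one word per divided-rate clock cycle. The function name and the choice to pad a short tail word with `None` are illustrative assumptions, not part of any interface standard.

```python
def to_parallel(pixels, n):
    """Group a serial pixel stream into n-wide words, one word per
    divided-rate (clock/n) cycle, padding the final word with None
    if the stream length is not a multiple of n."""
    words = []
    for i in range(0, len(pixels), n):
        word = pixels[i:i + n]
        word += [None] * (n - len(word))  # pad a short tail word
        words.append(word)
    return words

stream = [f"p{i}" for i in range(8)]  # p0..p7, as in the FIG. 6 comparison
print(to_parallel(stream, 2))         # 4 words, one per clock_div2 cycle
print(to_parallel(stream, 4))         # 2 words, one per clock_div4 cycle
```

The same 8 pixels pass through in 8, 4, or 2 cycles depending on N, which is why the digital logic can be clocked at 1/N the pixel rate while maintaining the overall data rate.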
BRIEF SUMMARY
[0004] According to a first aspect of the present disclosure, a video source includes a signal provider and a source transmitter. The signal provider is configured to provide a digital video signal including a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a designated portion, the designated portion beginning at a specified pixel location and of a specified number of pixels in length. The source transmitter includes a serial to parallel converter, a parallel processor and a converter. The serial to parallel converter is configured to: receive the video signal from the signal provider; perform a grouping of the pixels of the video signal into N pixel segments of adjacent pixels, where N is an integer greater than one, to convert the video signal into a parallel format; augment the designated portion by one or more pixels of the video signal adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels when converted to parallel format; and provide the video signal in the parallel format. The parallel processor is configured to process the video signal in parallel format, including encoding portions of the first blanking interval of the processed video signal other than the augmented designated portion. The converter is configured to provide the compressed, processed video signal to a video sink device.
[0005] Optionally, in a second aspect and in furtherance of the first aspect, wherein the one or more pixels of the video signal adjacent to the designated portion includes pixels both following and preceding the designated portion.
[0006] Optionally, in a third aspect and in furtherance of any of the first and second aspects, each of the frames include an active video portion and the video signal provided by the signal provider includes the active video portions in a compressed form. The video source further includes a video decompressor configured to decompress the video portion of the provided video signal and provide a resultant signal to the serial to parallel converter.
[0007] Optionally, in a fourth aspect and in furtherance of the third aspect, the active video portions of the video signal provided by the signal provider are compressed in a Moving Picture Experts Group (MPEG) format.
[0008] Optionally, in a fifth aspect and in furtherance of any the first to fourth aspects, the serial to parallel converter receives the video signal from the signal provider at a first clock rate and provides the video signal in the parallel format at a clock rate at 1/N the first clock rate.
[0009] Optionally, in a sixth aspect and in furtherance of any of the preceding aspect, the converter is configured to provide the compressed, processed video signal in an HDMI (High-Definition Multimedia Interface) format.
[0010] Optionally, in a seventh aspect and in furtherance of the preceding aspect, the converter is configured to provide the compressed, processed video signal asynchronously. [0011] Optionally, in an eighth aspect and in furtherance to any of the preceding aspects, the designated portion is for copy protection processing.
[0012] Optionally, in a ninth aspect and in furtherance of the preceding aspect, the designated portion is for High-bandwidth Digital Content Protection (HDCP) processing.
[0013] Optionally, in a tenth aspect and in furtherance to any of the preceding aspects, the designated portion is part of a vertical synchronization period.
[0014] According to another aspect of the present disclosure, a video processing circuit includes a serial to parallel converter and a parallel processing circuit. The serial to parallel converter is configured to: receive a video signal having a first clock rate, the video signal including a designated portion of a specified number of contiguous pixels; convert the video signal into N parallel data streams at a clock rate of 1/N the first clock rate, where N is an integer greater than one, by grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and prior to grouping the pixels, augment the designated portion by one or more pixels of the video signal to align an end of the designated portion with the grouping of the pixels. The parallel processing circuit is configured to receive and concurrently process the N parallel data streams at a clock rate of 1/N the first clock rate.
[0015] According to one other aspect of the present disclosure, a method of processing a video signal includes receiving a video signal that includes a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a corresponding designated portion beginning at a specified pixel location and of a specified number of pixels in length. The method also includes designating a first set of one or more contiguous pixels adjacent to the designated portion of the video signal as being selectively assignable to the designated portion and converting the video signal into a parallel format of N parallel data streams, where N is an integer greater than one. Converting the video signal into a parallel format of N parallel data streams includes: grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and augmenting the designated portion of the corresponding first blanking interval by one or more pixels from the first set of contiguous pixels adjacent to the designated portion to align end of the designated portion with the grouping of the pixels.
[0016] Embodiments of the present technology described herein provide improvements to the parallel processing of video data. This includes grouping the pixels in a stream of video data so that specified portions of the stream are grouped together and the pixels of a group are uniformly treated in the parallel processing. In one set of embodiments, a designated portion of sections of video frames that would otherwise be compressed when transmitted from a video source to a video sink can be augmented by a transition period of additional following pixels, additional preceding pixels, or both from the stream of data, so that pixels from the designated portion are not grouped for parallel processing with pixels that will be compressed for subsequent transmission from the video source to the video sink.
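The uniform-treatment property described above can be expressed as a simple invariant: after the designated portion is augmented, no N-pixel group may mix encode-ON and encode-OFF pixels. A minimal Python check of that invariant, using illustrative names and toy data:

```python
def groups_are_uniform(encode_flags, n):
    """True if every n-pixel group has a single encoding status,
    i.e. no group mixes encode-ON and encode-OFF pixels."""
    for i in range(0, len(encode_flags), n):
        group = encode_flags[i:i + n]
        if len(set(group)) > 1:
            return False
    return True

# Toy 16-pixel blanking stream: encoding off for pixels 4..9 (6 pixels)
flags = [True] * 4 + [False] * 6 + [True] * 6
print(groups_are_uniform(flags, 2))  # the OFF region aligns with 2-pixel groups
print(groups_are_uniform(flags, 4))  # the group for pixels 8..11 mixes OFF and ON
```

When the check fails for a given N, the augmentation described in the embodiments extends the OFF region until the invariant holds, so each parallel pipeline applies one operation per group.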
[0017] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.
[0019] FIG. 1 illustrates a video system including a video source connected to a video sink with a cable assembly.
[0020] FIG. 2 is a high level functional diagram of the source/sink interface for an HDMI or other standard.
[0021] FIG. 3 is a block diagram of an embodiment for the video source transmitter introduced in FIG. 1.
[0022] FIG. 4 is a schematic illustration of the structure of a frame of video data, illustrating the designated portions of the video blanking period, as specified by the interface standard.
[0023] FIG. 5 illustrates an example of the placement of a designated portion within the blanking interval.
[0024] FIG. 6 schematically compares single pipe (non-parallel) processing, 2x parallel processing, and 4x parallel processing.
[0025] FIG. 7 illustrates the non-alignment of the portion designated for no Repeat Count (RC) encoding within the sub-division of the blanking interval for 4x parallel processing. [0026] FIG. 8 illustrates the incorporation of a transition period before and after the designated portion that can be used to augment the designated portion.
[0027] FIG. 9 is a more generalized illustration to show the use of the transition period for an arbitrary condition when the beginning of the designated portion does not align with the grouping of pixels.
[0028] FIG. 10 is a high-level flow diagram that is used to summarize methods for handling specified portions of a stream of video data, such as those designated for no RC encoding, when converting video data to a parallel format.
DETAILED DESCRIPTION
[0029] The present disclosure will now be described with reference to the figures, which in general relate to the processing and transmission of video signals. To increase the rate at which data can be transferred between a video source and a video sink, the data can be compressed. In some standards, such as in examples of HDMI (High-Definition Multimedia Interface) standards, the parts of a video not containing active video data, such as the blanking intervals, are compressed to increase the transfer efficiency. In some cases, however, the blanking intervals include a designated portion that is specified not to be compressed. For example, when supporting High-bandwidth Digital Content Protection (HDCP) in HDMI, the blanking intervals of a frame may have a specified range of pixels starting at a specified pixel location and designated as non-compressible. Before transmitting a video signal to a video sink, a video source will perform processing on the video signal; and, in order to more readily handle the high data rate, the video source will often employ parallel processing. For parallel processing of the stream of video data, the data stream is sub-divided into contiguous sets of pixels of a size corresponding to the number of parallel processing pipelines. However, this sub-division of the data stream may not line up with one or both of the end and beginning of the designated portion of the video data stream, which can complicate the data processing. To alleviate this problem, the video source can augment the designated portion with a transition period of additional adjacent pixels at the end of the designated portion, the beginning of the designated portion, or both.
[0030] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that scope of the claims should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.
[0031] Before providing additional details of techniques for handling specified segments of video data for parallel processing, FIG. 1 is used to describe an embodiment for some components of a video system. The video system of FIG. 1 includes a video source 110 that provides a video signal to a video sink 130. Examples of a video sink 130 are a television, monitor, or other display device or end use device; the video sink may also be an amplifier or repeater that would in turn act as a video source for a subsequent element in a video system. On the video sink 130, a receiver circuit Sink Rx 131 receives the signal from the video source 110, where the Sink Rx 131 can include an equalization circuit and other interface elements for the signal received from the video source 110 over a cable or other connector.
[0032] The video source 110 provides the video signal to the video sink 130 from a transmitter circuit Source Tx 119. Some examples of a video source 110 are a set top box, a DVD, Blu-Ray or other media player, or a video camera. The video source can be any system-level product that provides a digital video signal, which may or may not be compressed digital video, depending on the embodiment. For example, in the case of a Blu-Ray player, the video will generally be compressed, such as according to an MPEG (Moving Picture Experts Group) standard, but in an embodiment for a digital video camera the provided signal is typically not compressed.
[0033] In the video source 110, the video signal is provided by the signal provider 111. In the example of the DVD or other media player, the signal provider 111 would read the media to provide the video data. In the example of a set-top box or other device that receives the video signal over a cable or other connector, the video signal is received at a receiver circuit or interface for the signal provider 111. For example, in a set-top box embodiment of a video source 110, the set-top box might receive a video signal from a cable provider over a coaxial cable, where the video signal is compressed and encoded according to an MPEG (Moving Picture Experts Group) standard, such as MPEG-4, or another compression algorithm. [0034] As the received video signal will often be compressed, such as with an MPEG-type compression, the stream of received video data can be decompressed at the video decompression block 112 to generate a baseband (i.e., uncompressed) digital video/audio signal. Depending on the embodiment, in some cases (such as a video camera) where video decompression is not needed, the video decompression block 112 need not be included in the video source device 110. The video source 110 can then perform processing on the decompressed stream of video data. For example, in addition to image processing, the video data may be encrypted in some embodiments, formed into packets, have error correction information added, or have other operations performed upon it. Among other processing, this can include functionality to comply with the requirements of an interface standard, such as HDMI, to transmit the video signal to the sink device 130 over the cable assembly 121, as performed in the Source Tx 119. To handle higher data rates, parallel processing can be used in the Source Tx 119, with the data stream converted into a parallel format as discussed below with respect to FIG. 3.
[0035] The video signal can be transmitted from the video source 110 to the video sink 130 over a cable assembly 121, of which there are a number of formats such as component video cables, VGA (Video Graphics Array) cables, or HDMI cables, where the HDMI example will be used as the main embodiment for the following discussion. An HDMI cable assembly 121 will be a cable with plugs or connectors 125 and 127 on either end. The plugs 125 and 127 can plug into corresponding sockets or receptacles 123 and 129 to provide video data from the Source Tx 119 to the Sink Rx 131. In a common embodiment, the video data as received at the video source 110 will have the active video (i.e., the pixels of the image to be provided on a television or other display) compressed, but the video data transmitted over the cable assembly 121 to the video sink 130 can have uncompressed or compressed active video portions. For example, the active video may be DSC (Display Stream Compression) compressed, which is a visually lossless low-latency compression algorithm.
[0036] FIG. 2 is a high level functional diagram of the source/sink interface for an HDMI or other standard. On the left is the Source Tx 219, the arrows in the center represent the signals that are carried in the cable assembly 221, and on the right is the Sink Rx 231. The video data is transferred over the data lanes, where there can be a number of such lanes to provide high data transfer rates. The shown embodiment has four data lanes, but other embodiments can have more or fewer data lanes. The interface may also be operable in different modes, where less than all of the available data lanes are used in some modes if, for example, the interface is operating at a lower data rate or to provide back-compatibility with earlier versions of a standard. In the shown example, a high data rate four lane mode could use all of the provided data channels, while a three lane mode can be provided for back compatibility to an earlier version of a standard by disabling one of the lanes. In some embodiments, the video source on the Source Tx 219 side can configure the link to operate at different bit rates using a fixed rate link. The cable assembly can also have a number of control lines for the exchange of control signals over the source/sink link.
[0037] FIG. 3 is a block diagram of an embodiment for a Source Tx 319, such as the Source Tx 119 of the video source 110 introduced in FIG. 1. The Source Tx 319 includes a converter 345 configured to convert the output of the preceding parallel processing section 343 to multiple lanes and transmit the signal to drive a cable assembly whose plug would attach at the socket or receptacle 323. The Source Tx 319 receives the baseband digital video/audio signal from the signal provider 111 (after video decompression, if needed, at video decompression block 112) at the serial to parallel converter 341. To handle higher data rates, parallel processing can be used, with the data stream converted into a parallel format in the serial to parallel converter 341 before being provided to the parallel processing block 343, where parallel streams of video data are constructed and encrypted according to the standard of the interface for transfer over the cable assembly from the socket 323.
[0038] Although the video data has been decompressed at video decompression block 112 and the video data will be transferred from the video source 110 to a video sink with the active video portions of its video data either uncompressed or compressed, to further increase the transfer utilization, the portion of the video data other than the active video can be compressed in a process of Repeat Count (RC) encoding, as described further below. The RC encoding, or other encoding applied to the processed video data prior to being transmitted from the Source Tx 319, can be performed as part of the parallel processing in block 343. (In embodiments where the video data is to have compression applied to its active video portions, such as DSC compression, this can also be performed as part of the parallel processing block 343 operations.) The various components of the video source 110 illustrated in FIG. 1 and Source Tx 119/319 in FIGs. 1 and 3 can be implemented in hardware, firmware, software, or various combinations of these; and although represented as separate blocks in FIGs. 1 and 3, an actual implementation may combine various elements. For example, some elements, such as the video decompression block 112, can be implemented as an application specific integrated circuit (ASIC), with others, such as the parallel processing block 343 and serial to parallel conversion 341 or converter 345, implemented together through software on more general processing circuitry.
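The exact RC encoding format is defined by the HDMI specification and is not reproduced in this document; the following Python sketch is only a generic repeat-count (run-length) scheme, included to illustrate why the highly repetitive blank pixels of the blanking intervals compress well under this style of encoding. The function name and run representation are assumptions of the sketch.

```python
def rc_encode(pixels):
    """Generic repeat-count sketch: each run of identical values
    becomes a (value, count) pair.  Repetitive blanking data collapses
    to very few pairs, which is the source of the transfer-efficiency
    gain described for RC encoding."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(value, count) for value, count in runs]

blanking = [0] * 100 + [7] * 42       # 142 highly repetitive blank pixels
print(rc_encode(blanking))            # -> [(0, 100), (7, 42)]
```

Two pairs stand in for 142 pixel values; by contrast, active video or the HDCP Keep-Out pixels, which must pass through unmodified, gain nothing from this treatment, which is why the designated portion is excluded from encoding.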
[0039] As mentioned in the preceding description of Source Tx 319, to improve the data transfer efficiency, portions of the video data other than the active video portion can be compressed in the parallel processing of block 343. A stream of video data is made up of a series of frames, where each frame includes an active video portion, corresponding to the video data that will actually be seen on a television or other display, and additional portions. FIG. 4 is a schematic illustration of the structure of a frame of video data.
[0040] When a video image is displayed on a television or other display, the image is formed of a series of rows of pixels presented in a sequence of frames. In FIG. 4, these pixels of active video 401 are represented by the lines with the diagonal hashing. Additionally, a preceding portion of each of these lines of active data pixels and a preceding number of lines are "blanking intervals", which are portions of the frame not typically displayed. These blanking intervals are formed of a number of "blank pixels". The term "pixel" is sometimes used to refer to only the "active pixels" of the active video portion that is displayed, but as used here, unless further qualified, "pixel" can refer to either an "active pixel" of the active video portion or a "blank pixel" from a blanking interval. (In particular, although more generally applicable, the following discussion is primarily focused on the blank pixels of the blanking intervals.) The origin and much of the terminology related to these blanking intervals is historical, from when televisions used cathode ray tubes that were illuminated by moving beams of electrons very quickly across the screen. Once the beam reached the edge of the screen, the beam was switched off and the deflection circuit voltages (or currents) were returned to the values they had for the other edge of the screen. This would have had the effect of retracing the screen in the opposite direction, so the beam was turned off during this time, and this part of the frame's pixels is the "horizontal blanking interval" of each line that precedes the portion of active video. At the end of the final line of active video in a frame, the deflection circuits would need to return from the bottom of the screen to the top, corresponding to the "vertical blanking interval" of the first several lines of a frame that contain no active video.
Although a modern digital display does not require the time for the deflection circuit to return from one side of a screen to the other, the blanking intervals, originally retained for backward compatibility, have been maintained to carry additional data, such as sub-title or closed-caption display data, and control data.
[0041] More specifically, FIG. 4 depicts a single frame of video, such as is transmitted from the Source Tx 319 over an HDMI interface. A single frame is typically transmitted in 1/60 second. The frame of FIG. 4 shows two basic time periods within the frame, corresponding to the active video and the blanking intervals. The active period typically occupies about 85% of the frame time, but FIG. 4 is drawn to focus on the blanking periods and is not drawn in proportion to the frame time.
[0042] For the example of FIG. 4, within the lines of (blank) pixels of the blanking periods or intervals, there are white periods (e.g., 409) and black periods (e.g., 407). The white portions are control periods and the black periods are for auxiliary data, such as audio data or control data. FIG. 4 also illustrates the horizontal synchronization pulse, or Hsync, that separates the horizontal lines, or scan lines, of a frame. The horizontal sync signal is a single short pulse which indicates the start of every line, after which follows the rest of the scan line. The vertical synchronization pulse, or Vsync, is also shown and is used to indicate the beginning of a frame or, in embodiments where a frame is made up of alternating fields, to separate the fields. The vertical sync pulse occurs within the vertical blanking interval and occupies the whole line intervals of a number of lines at the beginning and/or end of a scan when there is no active video.
[0043] Much of the content in the control periods of the blanking intervals (the white portions) is repetitive or contains no content. In order to increase the transfer efficiency across the cable assembly from the video source to a video sink, these control periods can be compressed or encoded, where this encoding or compression can be referred to as control period Repeat Count, or RC, encoding or compression. This RC encoding is a type of "run length" encoding that differs from what is commonly meant by "video compression", which usually refers to the compression of the active video part of the video frame, rather than the blanking periods, and examples of which are MPEG or DSC type compression. For that reason, although RC encoding is a form of compression applied to the blank pixels, in the present discussion this will be referred to primarily as "RC encoding", rather than "RC compression", although this latter terminology may also be used elsewhere. More generally, although the discussion here is presented in the context of RC encoding, it can also be applied to other encodings for video processing operations where it can be useful to treat contiguous groups of pixels (both blank and active pixels) that are grouped for parallel processing operations. [0044] To indicate whether portions of the blanking intervals are compressed through RC encoding, and the degree to which they are compressed, in some embodiments a Repeat Count indicator can be used. For example, the control characters can be compressed up to 8x using a 3-bit Repeat Count Coding, RC(0, 1, 2), to indicate the number of times identical control characters repeat in a run. If set to 0, the control character was sent once but did not repeat in the current "run"; if set to 1, the control character was sent once and repeated once (RC=1) for a total of two times in the current "run", and so on. This encoding can be performed in the parallel processing block 343 of Source Tx 319.
This information can be included in the packets when the video data is formed up into packets for transmission over the cable assembly to the video sink, where the RC encoded portions can be expanded back out by repeating an RC compressed portion the specified number of times.
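The repeat-count scheme described above can be sketched as a simple run-length codec. The following is a minimal illustration, not the HDMI 2.1 wire format: the function names and the character representation are hypothetical, and the 3-bit repeat count is modeled as an integer RC in the range 0 to 7, so a run of up to 8 identical control characters collapses to a single (character, RC) pair.

```python
def rc_encode(control_chars):
    """Run-length encode a sequence of control characters using a 3-bit
    repeat count: RC is the number of extra repeats in the run (0..7),
    so a run of up to 8 identical characters becomes one (char, RC) pair."""
    encoded = []
    i = 0
    while i < len(control_chars):
        run = 1
        while (i + run < len(control_chars)
               and control_chars[i + run] == control_chars[i]
               and run < 8):              # 3-bit RC caps a run at 8 characters
            run += 1
        encoded.append((control_chars[i], run - 1))  # RC = repeats beyond the first
        i += run
    return encoded

def rc_decode(encoded):
    """Expand (char, RC) pairs back out by repeating each character RC+1 times."""
    out = []
    for char, rc in encoded:
        out.extend([char] * (rc + 1))
    return out
```

A run of five identical control characters thus encodes as a single pair with RC=4, and decoding repeats each character RC+1 times, recovering the original sequence; runs longer than 8 simply split into multiple pairs.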
[0045] Although RC encoding of the control periods of the blanking intervals can increase the data transfer efficiency without compression of the active video, in some embodiments the blanking intervals can include portions that are designated not to be compressed, which are represented in black (such as 407), and can be used to support auxiliary data transmission. One particular example of a portion of the blanking interval that is not to be RC compressed is when High-bandwidth Digital Content Protection, or HDCP, is incorporated into HDMI, where the blanking interval includes a designated portion, or pixel location, beginning at a specified pixel number and of a specified number of pixels in length. In FIG. 4, a designated portion of the blanking interval that is designated to be uncompressed to support HDCP (data encryption) processing is shown at 403, with adjacent transition portions 405 before and after from which pixels can be added in; these transition portions are not designated to be compressed, but may or may not be compressed. More specifically, the following discussion uses an HDCP 1.4 or 2.2 and HDMI 2.1 embodiment as its example, but the techniques can more generally be applied to other embodiments where the blanking interval or another portion of the video signal is subject to encoding, but includes portions that are not to be encoded.
[0046] Both HDCP 1.4 and HDCP 2.2 define a Keep-Out period, for a total of 142 pixel clock cycles, at pixel locations from Pixel 508 to Pixel 650, referenced to the Vsync signal's leading edge. The HDMI 2.1 Specification specifies that a video source shall not use RC encoding in the Keep-Out region, regardless of whether or not HDCP is supported. The HDMI 2.1 Specification also specifies that, for all other control periods, video sources shall utilize and maximize RC encoding when a particular set of blanking interval data (including Hsync and Vsync) all remain unchanged for 2 or more periods. FIG. 5 illustrates this situation.
[0047] FIG. 5 illustrates an example of the placement of a designated portion within the blanking interval. Along the top of FIG. 5 is the pixel clock signal (pixel_clk), with the Vsync signal shown in the second line. The Vsync signal is high during the Vsync period and is otherwise low. The pixel count for the pixel clock signal is aligned to start at Pixel 0, taken as the leading edge of the Vsync signal. The HDCP Keep-Out period corresponds to the designated portion in this example and is illustrated in the third line, beginning at pixel location Pixel 508 (relative to the leading edge of the Vsync signal) when the signal goes high and extending for 142 pixel counts to Pixel 650 when the signal goes low. These pixel locations are specified values, where the Keep-Out period is designated not to be compressed in order to provide time that may be needed for HDCP related operations on the video source and sink. The bottom line illustrates the RC encoding requirements of the HDMI 2.1 Specification, where the portions other than the HDCP Keep-Out period are to be RC compressed, while RC encoding is off for the HDCP Keep-Out period.
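As a minimal sketch of the window just described, the Keep-Out test can be written as follows. The constants are the HDCP/HDMI 2.1 values above; the half-open end convention (Pixel 650 is the first pixel after the period) and the helper name are illustrative assumptions.

```python
KEEP_OUT_START = 508   # first Keep-Out pixel, counted from the Vsync leading edge
KEEP_OUT_LEN = 142     # total pixel clock cycles in the Keep-Out period
KEEP_OUT_END = KEEP_OUT_START + KEEP_OUT_LEN  # 650, first pixel after the period

def in_keep_out(pixel_index):
    """True when RC encoding is designated off for this Vsync-relative pixel."""
    return KEEP_OUT_START <= pixel_index < KEEP_OUT_END
```

With this convention, pixels 508 through 649 are inside the Keep-Out period and pixel 650 is the first pixel where RC encoding may resume.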
[0048] Referring back to the video source 110 of FIG. 1, the signal waveforms of FIG. 5 are from a frame of the baseband digital video/audio data after it has undergone video decompression in block 112, but prior to RC encoding in Source Tx 119. To increase the rate at which data can be processed, parallel processing can be used, which requires the decompressed video data stream to be converted to a parallel format. However, the structure of the designated period of FIG. 5 can lead to complications when converting the data stream into a parallel format, as can be illustrated with respect to FIG. 6 and FIG. 7.
[0049] Parallel processing, or "pipelining", is common practice for high speed digital hardware designs. The rate at which pixel and blanking data is to be processed in an HDMI 2.1 video source may be higher than what some common semiconductor processes can support. For example, in HDMI 2.1 the bandwidth requirement is increased to 48 Gbps, a 3x increase compared with HDMI 2.0, so that a parallel design would be used if the rate of the digital logic is to be maintained or effectively reduced. Consequently, to be able to process the stream of video data at a sufficient rate, the embodiment of a Source Tx 319 illustrated in FIG. 3 uses parallel processing at block 343.
[0050] FIG. 6 schematically compares single pipe (i.e., non-parallel) processing, 2x parallel processing, and 4x parallel processing. At top, for a non-parallel single pipe stream of video data, is the basic pixel clock used for processing, below which is a stream of pixels, where 8 clock cycles correspond to 8 pixels of data (e.g., p0 to p7). In the corresponding 2x pipe, the pixel clock signal (clock_div2) is at half the rate of the single pipe pixel clock, but the same data rate is maintained by breaking the video stream up into pairs of pixels, with a set of two pixels being processed in each clock cycle. Consequently, over a set of 8 clock_div2 cycles, 16 pixels of data (e.g., p0 to p15) can be processed. Similarly, in the 4x pipeline the pixel clock signal clock_div4 runs at quarter speed relative to the single pipe, the stream of pixels is broken up into sets of four, and 16 pixels of data (e.g., p0 to p15) can be processed in 4 clock_div4 cycles.
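The chunking shown in FIG. 6 can be sketched as below; the helper name is hypothetical and pixels are represented as strings purely for illustration:

```python
def to_parallel(pixels, n):
    """Group a serial pixel stream into n-pixel sets for an n-wide pipeline;
    each set is consumed in one cycle of the divided (1/n rate) clock."""
    return [pixels[i:i + n] for i in range(0, len(pixels), n)]

stream = [f"p{i}" for i in range(16)]  # p0..p15
pairs = to_parallel(stream, 2)         # 8 sets of 2 for the 2x pipe (clock_div2)
quads = to_parallel(stream, 4)         # 4 sets of 4 for the 4x pipe (clock_div4)
```

The same 16 pixels thus take 16 single-pipe clocks, 8 clock_div2 cycles, or 4 clock_div4 cycles.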
[0051] To convert the stream of video data into a parallel format, the pixels are grouped into sets of contiguous pixels of a size corresponding to the degree of parallelism (i.e., number of pipelines) used. However, this can lead to processing complications if the stream of pixels includes a structure whose boundaries do not align with the sub-division into the sets used for parallel processing. For example, the HDCP Keep-Out period has a specified duration of 142 pixels for the designated portion that is not to have RC encoding. However, a 142-cycle no-encoding region does not work well with parallel pipelining, because 142 is a difficult number for parallelism: it is not a multiple of 4, 6, or 8, which are typical numbers used for parallelism. Because of this, the special handling of the 142-cycle period in a parallel design can lead to a complexity increase for HDMI sources.
[0052] FIG. 7 illustrates the non-alignment of the portion designated for no RC encoding with the sub-division of the blanking interval for 4x parallel processing. In FIG. 7, the top three lines (Pixel Clock, Vsync, HDCP Keep-Out) correspond to the top three lines of FIG. 5 and the bottom line (HDMI 2.1 Requirements) corresponds to the bottom line of FIG. 5. Below the HDCP Keep-Out waveform is the video blanking data, numbered starting from 0 at the leading edge of the Vsync signal. Below this, the video blanking data is grouped into 4-data-pack sets for a 4x parallel pipeline, where, in this example, Pixel 0 has phase 0; that is, the grouping is aligned at the leading edge of the Vsync signal so that the first group is for pixels p0, p1, p2, and p3. This grouping continues such that, since 508 is divisible by four, pixels p508, p509, p510, and p511 are grouped together. Consequently, the grouping aligns with the RC encoding ON, RC encoding OFF transition.
[0053] The designated portion ends at pixel p649, so that pixels p648 and p649 are in the RC encoding OFF portion, while the next two pixels, p650 and p651, are in the RC encoding ON portion. However, the grouping of pixels for 4x parallel processing results in these pixels all being formed into the same 4-data group, so that a single 4-data group of pixels straddles the RC encoding OFF to ON transition. Consequently, within this last 4-data group, there is an encoding OFF to ON change. Disabling RC encoding for exactly 142 cycles introduces complexity at this boundary, as one part of the 4-data group is to be compressed while the other part is not. This complexity increase is required for 4/6/8/12/16-pixel pipelines because 142 is not a multiple of any of these numbers. Even a 2-pixel pipeline may require increased complexity, since, unlike the example of FIG. 7, the start of the designated portion may not align with the grouping of data for parallel processing.
[0054] To improve upon this situation, the following embodiments introduce a "soft" switching for RC encoding ON/OFF near the boundary of the HDCP Keep-Out area, instead of the "hard" switching described above that can result in implementation complexity. A transition period of several pixel clock cycles can be added immediately before the HDCP Keep-Out period, immediately after the HDCP Keep-Out period, or both. For example, by being able to selectively add up to 8 immediately preceding pixels and up to 8 immediately following pixels to the designated RC encoding off region, both ends of the designated portion can be aligned with the data groupings for up to 8x parallel processing. More generally, up to N preceding and following adjacent pixels can be selectively reassigned to the designated region for up to Nx parallel processing. During this transition period, the video source may configure RC=0 (RC encoding off) for a selected number of these cycles. Within this transition period, conceptually, the area size for RC=0 is extended from 142 cycles to a larger number that is a multiple of the number of pipelines, so that the RC encoding OFF/ON and ON/OFF switching can be aligned with the boundaries of the grouping of pixels for the pipelines. Within each pixel group around the HDCP Keep-Out area, the RC encoding process remains uniform. This process affects only the video source during the conversion of the video data to a parallel format and has no impact on the video sink device. The expansion or augmentation of the designated area can be illustrated with respect to FIG. 8.
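The soft-switch boundary arithmetic can be sketched as below. This is only an illustration under a few stated assumptions: pixel indices are Vsync-relative, the RC-off region is half-open [start, end), groups of n pixels begin at indices congruent to phase mod n, and the function name is hypothetical.

```python
def align_rc_off(start, end, n, phase=0):
    """Snap an RC-encoding-OFF region [start, end) outward to n-pixel group
    boundaries; pixels pulled in from the transition periods get RC=0 too."""
    ext_start = start - ((start - phase) % n)  # extend into preceding pixels
    ext_end = end + ((phase - end) % n)        # extend into following pixels
    return ext_start, ext_end

# 142 is not a multiple of 4, 6, 8, 12, or 16, so some extension is generally
# needed. For FIG. 8 (4x pipe, phase 0) the 508..650 region grows by two
# trailing pixels to 508..652, i.e. 144 RC=0 cycles.
```

The extended region length is always a multiple of n, so every n-pixel group is either entirely RC-on or entirely RC-off.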
[0055] Although the description here is given in the context of the HDMI standard, where the designated portion is of a specific length (142 pixels), is for a particular purpose (HDCP processing), and requires special treatment in that it is not to be RC compressed, the described techniques apply more generally. When a stream of data is to be converted into a parallel format and includes a designated portion that collectively requires special treatment, a similar arrangement can be used to extend the end, the beginning, or both of the designated region so that it aligns with the grouping of data for the conversion to the parallel format.
[0056] FIG. 8 illustrates the incorporation of a transition period adjacent to the designated portion that can be used to augment the designated portion. The upper part of FIG. 8 repeats elements of FIG. 7 and again illustrates a 4x parallel design with 142 periods of RC=0, from Pixel 508 to Pixel 650. At the beginning of the HDCP Keep-Out period, with Pixel 508 located at Phase 0 (i.e., aligned with the leading edge of the Vsync waveform) of a 4-pixel group (Pixels 508, 509, 510, and 511), the RC encoding ON/OFF switching happens to be aligned with a 4x group boundary. As discussed above, since Pixel 508 aligns with the RC encoding transition, there is no issue with the pixel grouping for parallel processing. For the end of the designated HDCP Keep-Out region at Pixel 650, however, there is an RC OFF/ON switch, which happens inside a 4-pixel group (Pixels 648, 649, 650, and 651) and which, for a parallel design, requires special handling to hard switch RC encoding inside a pixel group.
[0057] To avoid the special handling and its associated complexity increase due to the RC OFF/ON switch within a pixel grouping, a transition period of adjacent pixels is introduced that can be reassigned to the RC encoding OFF region from the adjacent pixels after and/or before the RC encoding OFF region, as illustrated in the "Update" line. In the Update line, in addition to the RC encoding ON and RC encoding OFF regions, on either side of the RC encoding OFF region is an RC encoding OFF (RC=0) OK region that the video source can use to selectively extend the designated portion by up to Delta pixels. This allows the Source Tx 319 to extend the designated RC encoding OFF region from 142 pixel clock cycles up to 142 + 2*Delta pixel clock cycles. In the 4-pipe example shown, where the RC OFF/ON transition occurred at Pixel 650, the designated RC encoding OFF region is extended by two pixel clocks so that the transition now falls on Pixel 652, which is a multiple of four.
[0058] This transition period before and after the designated area conceptually extends the size of the RC Compression OFF region from 142 periods to a multiple of the number of pipelines. In the example of FIG. 8, with a 2-clock extension near the rear end, RC encoding remains off for 144 periods. As a result, there is no change for the RC ON/OFF transition near the front end, while the RC OFF/ON transition at the back end is now deferred by 2 periods and aligned with the 4x pipeline boundary.
[0059] In the example of FIG. 8, the forming of pixels into groups of, in this example, four pixels for parallel processing was aligned with the leading edge of the Vsync signal, having Phase 0, so that the first group of 4 pixels, or 4-data-pack, was formed from Pixels 0, 1, 2, and 3. More generally, however, the stream of video data being compressed will not be in phase, so that the grouping of the pixels does not align with the leading edge of the Vsync signal, but is off by some number of pixels. FIG. 9 looks at an example of this more generalized situation.
[0060] FIG. 9 is a more generalized illustration showing the use of the transition period for an arbitrary condition in which the beginning of the designated portion does not align with the grouping of pixels. The top three lines of FIG. 9, above the first horizontal broken line, repeat the pixel clock, Vsync, and HDCP Keep-Out lines as in FIGs. 7 and 8. Below is again the video blanking data, starting with Pixel 0 aligned with the leading edge of the Vsync signal. Unlike FIG. 8, the grouping of pixels into sets of four is not aligned with the leading edge of the Vsync signal, but instead is at Phase 3, so that the grouping begins three pixels prior to the Vsync leading edge, at Pixel -3 in this example. In FIG. 9, this offset is represented by the vertical broken lines, such as shown at clock cycle 1. In this situation, Pixel 0 is now grouped with Pixels -3, -2, and -1. This grouping continues on through the stream of video blanking data, where some of the grouping boundaries around the RC encoding ON/OFF and RC encoding OFF/ON transitions are also represented by vertical broken lines.
[0061] Below the video blanking data line is the forming of the video blanking data into 4-pixel groupings with Pixel 0 at Phase 3 without the transition period ("Before Update"), followed by the forming of the video blanking data into 4-pixel groupings with Pixel 0 at Phase 3 with the transition period ("After Update"). Then, below the bottom horizontal broken line labelled (at right) "Map to Single Pipe", is the RC encoding status without the RC=0 OK transition (the "Before" line), followed by the RC encoding status with the extended transition period ("After") at the last line of FIG. 9.
[0062] Considering first the "Before" line for RC encoding, as discussed with respect to FIG. 7, this requires RC encoding OFF for the 142 pixel clock cycles from Pixel 508 to Pixel 649. Considering the corresponding grouping of pixels for a 4x parallel design, as shown in the "Before Update" grouping directly below the video blanking data, since the grouping has Phase 3, Pixel 0 is grouped with Pixels -3, -2, and -1, rather than being grouped with Pixels 1, 2, and 3 as in the Phase 0 example of FIG. 8. Similarly, in FIG. 9, Pixel 508 has Phase 3 in a 4-pixel group of Pixels 505, 506, 507, and 508 in a 4x parallel design. At the end of the designated portion, the last pixel of the designated portion is Pixel 649, which now has Phase 0 in another 4-pixel group of Pixels 649, 650, 651, and 652. Consequently, at both the RC encoding ON/OFF transition and the RC encoding OFF/ON transition there is a 4-pixel grouping that includes both pixels that are to be RC compressed and pixels that are not to be RC compressed (indicated by the stippling). By adding a transition period both immediately before and immediately after the designated Keep-Out period, the video source may disable the RC encoding for these 4-pixel groupings with mixed RC encoding status.
[0063] To avoid the immediately preceding and following 4-pixel groups having mixed RC encoding status, the video source can add a 3-cycle transition immediately before and a 3-cycle transition immediately following the designated portion of, in this example, the HDCP Keep-Out period. This is illustrated in the bottom line ("After") where, after the designated portion is augmented by the transition periods, the extended area of RC=0 is 148 cycles. As illustrated in the grouping of pixels in the "After Update" line, the RC encoding status of all of the 4-pixel groupings is now uniform.
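The effect described for FIGs. 7 through 9, mixed-status boundary groups before augmentation and uniform groups after, can be checked with a small sketch. The helper name is hypothetical; groups hold n pixels, with the first group start chosen to match the figure's phase (e.g., Pixel -3 for the Phase 3 grouping of FIG. 9), and the RC-off region is given by its inclusive first and last pixels.

```python
def mixed_groups(off_start, off_end, n, first_group_start):
    """Return start indices of n-pixel groups that straddle the inclusive
    RC-OFF region [off_start, off_end], i.e. groups whose pixels have a
    mixed RC encoding status and would need special handling."""
    mixed = []
    g = first_group_start
    while g <= off_end:
        statuses = {off_start <= p <= off_end for p in range(g, g + n)}
        if len(statuses) == 2:   # group contains both RC-on and RC-off pixels
            mixed.append(g)
        g += n
    return mixed

# FIG. 9: the unextended Keep-Out region (Pixels 508..649) leaves two mixed
# boundary groups; after the 3-cycle extensions on each side (505..652),
# every group has a uniform RC encoding status.
```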
[0064] FIG. 10 is a high-level flow diagram that is used to summarize methods for handling specified portions of a stream of video data, such as those designated for no RC encoding, when converting video data to a parallel format. The flow of FIG. 10 illustrates the process of FIGs. 8 and 9 and is described with respect to the embodiment for a video source 110 as illustrated in FIG. 1 and a Source Tx 319 as illustrated in FIG. 3.
[0065] Beginning at 1001, the video source 110 receives a video signal from the signal provider 111. This could be a video signal received over a coaxial cable, for example, at signal provider 111, or read off of media, such as a DVD in the example of a DVD player for the video source device 110. The signal can include active video regions and a designated portion requiring special treatment, such as in the blanking interval for the examples above for the incorporation of HDCP processing into HDMI signals. At 1003, if the received signal is in compressed form, such as the active video being compressed in an MPEG format, the video signal is decompressed by the video decompression block 112 to provide the stream of uncompressed video data. The baseband video signal is then passed on to the logic of the Source Tx 319 to perform processing as specified, in this example, for an HDMI source in preparation for transmission over an HDMI interface. Within the Source Tx 319, the baseband signal will be organized into N parallel processing paths, with each path clocked at 1/N the rate of the incoming pixel rate.
[0066] More specifically, to convert the stream of video data into a parallel format while still being able to provide the special treatment for the designated portion, the Source Tx 319 can designate a following set of pixels as selectively assignable to the designated portion at 1005 and can designate a preceding set of pixels as selectively assignable to the designated portion at 1007. Both 1005 and 1007, as well as the following process of 1008 and its components 1009, 1011, and 1013, can be performed in the serial to parallel conversion block 341.
[0067] The conversion to a parallel format is performed at 1008. This includes the Source Tx 319 augmenting the designated portion from the following set of adjacent pixels at 1009 and augmenting the designated portion from the preceding set of adjacent pixels at 1011, so that the designated portion will align with the grouping of pixels into N-pixel sets or segments, as described with respect to the N=4 embodiments of FIGs. 8 and 9, where N is the degree of parallelism. The grouping of the pixels is then performed at 1013. In FIG. 10, 1005 and 1009, and 1007 and 1011, are shown as separate pairs of steps for purposes of discussion, but in an actual implementation different embodiments can combine these into single operations. Additionally, although these elements of FIG. 10 are shown in a particular order, in some embodiments they can be performed in different orders or even concurrently.
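Steps 1005 through 1013 can be sketched end to end as below. A Phase 0 stream is assumed for simplicity; the function name and the boolean-flag representation of RC status are hypothetical, and the designated portion is given half-open as [ko_start, ko_end).

```python
def group_with_augmented_keep_out(num_pixels, ko_start, ko_end, n):
    """Mark the designated [ko_start, ko_end) portion, augment it from the
    adjacent preceding (1011) and following (1009) pixels so its edges land
    on n-pixel boundaries, then form the n-pixel groups (1013)."""
    aug_start = ko_start - (ko_start % n)   # augment from preceding pixels
    aug_end = ko_end + ((-ko_end) % n)      # augment from following pixels
    rc_off = [aug_start <= p < aug_end for p in range(num_pixels)]
    # group the per-pixel RC-off flags into n-pixel sets
    groups = [rc_off[i:i + n] for i in range(0, num_pixels, n)]
    # after augmentation, every group has a uniform RC encoding status
    assert all(len(set(g)) == 1 for g in groups)
    return groups
```

With the HDMI values (508 and 650) and n=4, the augmentation adds two trailing pixels, so 144 pixels end up RC-off, and no group mixes RC-on and RC-off pixels.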
[0068] Once the video data has been converted into a parallel format, parallel processing of the signal in parallel processing block 343 is performed at 1015. In the primary example discussed here, where, aside from the designated portion, the blanking intervals are to be RC compressed, this follows at 1017, with the parallel processing block 343 RC encoding the blanking intervals except for the designated portions (including any augmentation). At 1019, the converter 345 can then transmit the video signal with RC compressed blanking intervals over a cable structure from the socket 323 to a video sink.
[0069] According to the techniques described above, adjacent pixels of a video signal before and after the HDMI HDCP Keep-Out or other designated portion can be reassigned as a transition period for soft and flexible switching of the RC encoding status. By adding a transition period of a number of pixels (such as up to N pixels) on either end of the designated portion, the complexity of parallelism for up to N pipelines can be reduced. The updates involved in these techniques can be limited to the video source and have no impact on a video sink.
[0070] In the described embodiments for use in an HDMI standard, the source transmitter on a source device (such as Source TX 119/319 on video source 110) can process the baseband video signal using the parallel processing structure, in accordance with the rules of the HDMI specification. For the blanking portion of the baseband video signal, the source transmitter can determine the portion of the video signal designated as an HDCP Keep-Out area, and the portions that are Auxiliary Data periods. The source transmitter can then enable blanking encoding, or RC encoding, for the portions of the blanking interval that are not designated Auxiliary Data periods and are not within the designated HDCP Keep-Out period. Around the designated periods, the source transmitter can apply the exception presented above: disable blanking encoding at a point in the pixel stream some number of pixel periods prior to the start of the HDCP Keep-Out area, and which aligns with the parallel processing structure, such that all N pixels within the N processing paths have blanking encoding disabled; and keep blanking encoding disabled until a point in the pixel stream a number of pixel periods after the end of the HDCP Keep-Out area, and which aligns with the parallel processing structure, such that all N pixels within the N processing paths have blanking encoding disabled.
[0071] It is understood that the present subject matter may be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this subject matter will be thorough and complete and will fully convey the disclosure to those skilled in the art. Indeed, the subject matter is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the subject matter as defined by the appended claims. Furthermore, in the following detailed description of the present subject matter, numerous specific details are set forth in order to provide a thorough understanding of the present subject matter. However, it will be clear to those of ordinary skill in the art that the present subject matter may be practiced without such specific details. [0072] Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable instruction execution apparatus, create a mechanism for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0073] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.
[0074] The disclosure has been described in conjunction with various embodiments. However, other variations and modifications to the disclosed embodiments can be understood and effected from a study of the drawings, the disclosure, and the appended claims, and such variations and modifications are to be interpreted as being encompassed by the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.
[0075] For purposes of this document, it should be noted that the dimensions of the various features depicted in the figures may not necessarily be drawn to scale.
[0076] For purposes of this document, reference in the specification to “an embodiment,” “one embodiment,” “some embodiments,” or “another embodiment” may be used to describe different embodiments or the same embodiment.
[0077] For purposes of this document, a connection may be a direct connection or an indirect connection (e.g., via one or more other parts). In some cases, when an element is referred to as being connected or coupled to another element, the element may be directly connected to the other element or indirectly connected to the other element via intervening elements. When an element is referred to as being directly connected to another element, then there are no intervening elements between the element and the other element. Two devices are “in communication” if they are directly or indirectly connected so that they can communicate electronic signals between them.
[0078] For purposes of this document, the term “based on” may be read as “based at least in part on.”
[0079] For purposes of this document, without additional context, use of numerical terms such as a “first” object, a “second” object, and a “third” object may not imply an ordering of objects, but may instead be used for identification purposes to identify different objects.

[0080] The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the subject matter claimed herein to the precise form(s) disclosed. Many modifications and variations are possible in light of the above teachings. The described embodiments were chosen in order to best explain the principles of the disclosed technology and its practical application to thereby enable others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope be defined by the claims appended hereto.
[0081] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims

What is claimed is:
1. A video source, comprising:
a signal provider configured to provide a digital video signal including a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a designated portion, the designated portion beginning at a specified pixel location and of a specified number of pixels in length; and
a source transmitter comprising:
a serial to parallel converter configured to:
receive the video signal from the signal provider,
perform a grouping of the pixels of the video signal into N pixel segments of adjacent pixels, where N is an integer greater than one, to convert the video signal into a parallel format,
augment the designated portion by one or more pixels of the video signal adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels when converted to parallel format, and
provide the video signal in the parallel format;
a parallel processor configured to process the video signal in parallel format, including encoding portions of the first blanking interval of the processed video signal other than the augmented designated portion; and
a converter configured to provide the compressed, processed video signal to a video sink device.
2. The video source of claim 1, wherein the one or more pixels of the video signal adjacent to the designated portion includes pixels both following and preceding the designated portion.
3. The video source of any of claims 1 and 2, wherein each of the frames includes an active video portion and the video signal provided by the signal provider includes the active video portions in a compressed form, the video source further comprising:
a video decompressor configured to decompress the video portion of the provided video signal and provide a resultant signal to the serial to parallel converter.
4. The video source of any of claims 1-3, wherein the active video portions of the video signal provided by the signal provider are compressed in a Moving Picture Experts Group (MPEG) format.
5. The video source of any of claims 1-4, wherein:
the serial to parallel converter receives the video signal from the signal provider at a first clock rate and provides the video signal in the parallel format at a clock rate of 1/N the first clock rate.
6. The video source of any of claims 1-5, wherein the converter is configured to provide the compressed, processed video signal in an HDMI (High-Definition Multimedia Interface) format.
7. The video source of any of claims 1-6, wherein the converter is configured to provide the compressed, processed video signal asynchronously.
8. The video source of any of claims 1-7, wherein the designated portion is for copy protection processing.
9. The video source of any of claims 1-8, wherein the designated portion is for High-bandwidth Digital Content Protection (HDCP) processing.
10. The video source of any of claims 1-9, wherein the designated portion is part of a vertical synchronization period.
11. The video source of any of claims 1-10, wherein each of the frames includes an active video portion and the parallel processor is configured to compress the active video portion of each of the frames.
12. The video source of any of claims 1-11, wherein the active video portions of the frames are compressed according to a DSC (Display Stream Compression) algorithm.
13. A method of processing a video signal, comprising:
receiving a video signal including a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, the first blanking interval including a corresponding designated portion beginning at a specified pixel location and of a specified number of pixels in length;
designating a first set of one or more contiguous pixels adjacent to the designated portion of the video signal as being selectively assignable to the designated portion;
converting the video signal into a parallel format of N parallel data streams, where N is an integer greater than one, including:
grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and
augmenting the designated portion of the corresponding first blanking interval by one or more pixels from the first set of contiguous pixels adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels.
14. The method of claim 13, wherein the first set of one or more contiguous pixels adjacent to the designated portion includes pixels both following and preceding the designated portion.
15. The method of any of claims 13 and 14, wherein: the video signal is received at a first clock rate and the N parallel data streams are provided in the parallel format at a clock rate of 1/N the first clock rate.
16. The method of any of claims 13-15, wherein the designated portion is for copy protection processing.
17. The method of any of claims 13-16, wherein the designated portion is for High-bandwidth Digital Content Protection (HDCP) processing.
18. The method of any of claims 13-17, wherein the designated portion is part of a vertical synchronization period.
19. The method of any of claims 13-18, further comprising:
parallel processing the N parallel data streams;
encoding portions of the first blanking interval of the processed video signal other than the augmented designated portion; and
transmitting the compressed, processed video signal to a video sink device.
20. The method of any of claims 13-19, further comprising:
transmitting the compressed, processed video signal in an HDMI (High-Definition Multimedia Interface) format.
21. The method of any of claims 13-20, further comprising:
transmitting the compressed, processed video signal asynchronously.
22. The method of any of claims 13-21, wherein each of the frames includes an active video portion and the received video signal includes the active video portions in a compressed form, the method further comprising:
prior to converting the video signal into a parallel format, decompressing the video portion of the received video signal.
23. The method of any of claims 13-22, wherein the active video portions of the received video signal are compressed in a Moving Picture Experts Group (MPEG) format.
24. The method of any of claims 13-23, wherein each of the frames includes an active video portion, the method further comprising:
compressing the active video portion of each of the frames.
25. The method of any of claims 13-24, wherein the active video portions of frames are compressed according to a DSC (Display Stream Compression) algorithm.
26. A video processing circuit, comprising:
a serial to parallel converter configured to:
receive a video signal having a first clock rate, the video signal including a designated portion of a specified number of contiguous pixels;
convert the video signal into N parallel data streams at a clock rate of 1/N the first clock rate, where N is an integer greater than one, by grouping of the pixels of the video signal into N pixel segments of adjacent pixels; and
prior to grouping the pixels, augment the designated portion by one or more pixels of the video signal adjacent to the designated portion to align an end of the designated portion with the grouping of the pixels; and
a parallel processing circuit configured to receive and concurrently process the N parallel data streams at a clock rate of 1/N the first clock rate.
27. The video processing circuit of claim 26, wherein the one or more pixels of the video signal adjacent to the designated portion includes pixels both preceding and following the designated portion.
28. The video processing circuit of any of claims 26 and 27, wherein the designated portion is part of a vertical synchronization period.
29. The video processing circuit of any of claims 26-28, wherein the designated portion is for copy protection processing.
30. The video processing circuit of any of claims 26-29, wherein the designated portion is for High-bandwidth Digital Content Protection (HDCP) processing.
31. The video processing circuit of any of claims 26-30, wherein the video signal comprises a plurality of frames, each of the frames includes an active video portion and the video signal received at the video processing circuit includes the active video portions in a compressed form, the video processing circuit further comprising:
a video decompressor configured to decompress the video portion of the received video signal and provide a resultant signal to the serial to parallel converter.
32. The video processing circuit of any of claims 26-31, wherein the active video portions of the video signal received at the video processing circuit are compressed in a Moving Picture Experts Group (MPEG) format.
33. The video processing circuit of any of claims 26-32, wherein the video signal comprises a plurality of frames, each of the frames includes an active video portion and the parallel processing circuit is configured to compress the active video portion of each of the frames.
34. The video processing circuit of any of claims 26-33, wherein the active video portions of frames are compressed according to a DSC (Display Stream Compression) algorithm.
35. The video processing circuit of any of claims 26-34, wherein the video signal includes a plurality of frames of multiple pixels, each of the frames having a first blanking interval of one or more pixels and an active video portion of one or more pixels, and wherein the designated portion is part of the first blanking interval.
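The boundary alignment recited in claims 1, 13, and 26 — extending a designated portion (such as an HDCP window in the blanking interval) by adjacent pixels so that it starts and ends on N-pixel group boundaries before serial-to-parallel conversion — can be sketched as follows. This is an illustrative sketch only, not part of the claimed subject matter; the function name and parameters are hypothetical.

```python
def augment_designated_portion(start: int, length: int, n: int) -> tuple[int, int]:
    """Extend a designated portion so it begins and ends on N-pixel
    group boundaries, as needed when pixels are grouped into N-pixel
    segments for conversion into N parallel data streams.

    start:  pixel index where the designated portion begins
    length: size of the designated portion in pixels
    n:      parallel grouping factor (N > 1 for parallel conversion)

    Returns (new_start, new_length) of the augmented portion.
    """
    if n <= 1:
        return start, length
    # Pull the start back to the previous group boundary,
    # i.e., augment with preceding adjacent pixels (cf. claim 2).
    new_start = (start // n) * n
    # Push the end forward to the next group boundary,
    # i.e., augment with following adjacent pixels.
    end = start + length
    new_end = -(-end // n) * n  # ceiling division to a multiple of n
    return new_start, new_end - new_start
```

For example, with a grouping factor of N = 4, a designated portion starting at pixel 5 with a length of 7 pixels ends at pixel 12; the start is pulled back to pixel 4 and the length grows to 8, so both edges fall on 4-pixel segment boundaries and no segment mixes designated and non-designated pixels.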
PCT/CN2019/125429 2019-02-27 2019-12-14 Parallel processing pipeline considerations for video data with portions designated for special treatment WO2020173183A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962811034P 2019-02-27 2019-02-27
US62/811,034 2019-02-27

Publications (1)

Publication Number Publication Date
WO2020173183A1 (en) 2020-09-03

Family

ID=72238363


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113705795A (en) * 2021-09-16 2021-11-26 深圳思谋信息科技有限公司 Convolution processing method and device, convolution neural network accelerator and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1122553A (en) * 1993-11-04 1996-05-15 德克萨斯仪器股份有限公司 Video data formatter for a digital television system
CN1573679A (en) * 2003-05-01 2005-02-02 创世纪微芯片公司 Enumeration method for the link clock rate and the pixel/audio clock rate
US20080031450A1 (en) * 2006-08-03 2008-02-07 Shigeyuki Yamashita Signal processor and signal processing method
US20080030614A1 (en) * 1997-04-07 2008-02-07 Schwab Barry H Integrated multi-format audio/video production system
CN107087132A (en) * 2017-04-10 2017-08-22 青岛海信电器股份有限公司 Receiver and method for transmitting signals



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19916675; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19916675; Country of ref document: EP; Kind code of ref document: A1)