This application claims the priority of the following applications, each of which is incorporated herein by reference in its entirety: U.S. Provisional Application No. 60/793,288, filed April 18, 2006; U.S. Provisional Application No. 60/793,276, filed April 18, 2006; U.S. Provisional Application No. 60/793,277, filed April 18, 2006; and U.S. Provisional Application No. 60/793,275, filed April 18, 2006.
Embodiment
The present invention relates to methods and systems for reducing memory access bandwidth and for sharing memory and other processing resources among the components of multiple video pipeline stages in one or more channels, in order to produce one or more high-quality output signals.
Fig. 4 shows a television display system in accordance with the principles of the present invention. The television display system shown in Fig. 4 may include: television broadcast signal 202, dual tuner 410, MPEG codec 230, off-chip storage device 240, off-chip memory 300, dual video processor 400, memory interface 530, and at least one external component 270. Dual tuner 410 may receive television broadcast signal 202 and produce a first video signal 412 and a second video signal 414. Video signals 412 and 414 may then be provided to dual decoder 420. Dual decoder 420 is shown inside dual video processor 400, but may also be external to dual video processor 400. Dual decoder 420 may perform a function on the first and second video signals 412 and 414 similar to that of decoder 220 (Fig. 2). Dual decoder 420 may include at least a multiplexer 424 and two decoders 422. In alternative arrangements, multiplexer 424 and one or both decoders 422 may be external to dual decoder 420. Decoders 422 provide decoded video signal outputs 426 and 428. It should be understood that decoders 422 may be any NTSC/PAL/SECAM decoders, as distinct from MPEG decoders. The inputs to decoders 422 may be digital CVBS, S-video, or component video signals, and the outputs of decoders 422 may be digital standard-definition signals such as Y-Cb-Cr data. A more detailed discussion of dual decoder 420 is provided in connection with Figs. 7, 8, 9, and 10.
Multiplexer 424 may be used to select at least one of the two video signals 412 and 414, or any number of input video signals. The at least one selected video signal 425 is then provided to decoders 422. The at least one selected video signal is shown in the figure as a single video signal to avoid overcrowding the drawing, but it should be understood that video signal 425 may represent any number of video signals provided to the inputs of any number of decoders 422. For example, multiplexer 424 may receive five input video signals and may provide two of those five input video signals to two different decoders 422.
The particular video signal processing arrangement shown in Fig. 4 allows the internal dual decoder 420 on dual video processor 400 to be used in time-shifting applications, reducing the cost of an external decoder that might otherwise be needed. For example, one of outputs 426 and 428 of dual decoder 420 may be provided to a 656 encoder 440 in order to suitably encode the video signal into a standard format before the video signal is interlaced. The 656 encoder 440 may be used to reduce the data width so that the signal can be processed at a faster clock frequency. For example, in some embodiments, 656 encoder 440 may reduce 16-bit data, h-sync, and v-sync signals down to 8 bits for processing at twice the frequency. This may be the standard for interfacing standard-definition video between any NTSC/PAL/SECAM decoder and an MPEG encoder. The encoded video signal 413 may then be provided to external MPEG codec 230, for example via a port on the video processor, to produce a time-shifted video signal. Another port, namely flexible port 450 on dual video processor 400, may be used to receive this time-shifted video signal from MPEG codec 230. Handling portions of the digital video signal outside the video processor in this way reduces the complexity of the video processor. In addition, the time shifting performed by MPEG codec 230 may require operations including compression, decompression, and interfacing with non-volatile mass storage, all of which may be outside the scope of the video processor.
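As a rough illustration of the width-halving idea (not the 656 encoder's actual implementation), the following sketch interleaves the separate 8-bit luma and chroma buses of a 4:2:2 stream onto one 8-bit stream at twice the rate; the Cb-Y-Cr-Y ordering follows the common 656-style convention and is an assumption here:

```python
def pack_656(y, cb, cr):
    """Multiplex 4:2:2 Y/Cb/Cr samples onto one 8-bit stream (Cb, Y, Cr, Y, ...).

    y has two luma samples per chroma pair, so the output carries the same
    data as the 16-bit bus, but one byte per (doubled) clock."""
    out = []
    for i in range(0, len(y), 2):
        out += [cb[i // 2], y[i], cr[i // 2], y[i + 1]]
    return out
```

The output stream has the same total bit rate as the 16-bit input, which is why the clock must run at twice the frequency.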
Dual video processor 400 may also be used to produce other video signals (for example, a cursor, an on-screen display, or various other forms of display, in addition to television broadcast signal 202) that may be used at at least one external component 270 or otherwise provided to an external display component. For this purpose, dual video processor 400 may include a graphics port 460 or a pattern generator 470.
The decoded video signals, as well as various other video signals from graphics port 460 or pattern generator 470, may be provided to selector 480. Selector 480 selects at least one of these video signals and provides the selected signal to on-board video processing component 490. Video signals 482 and 484 are two illustrative signals that may be provided to on-board video processing component 490 by selector 480.
On-board video processing component 490 may perform any suitable video processing function, for example de-interlacing, scaling, frame rate conversion, channel blending, and color management. Any processing resource in dual video processor 400 may send data to and receive data from off-chip memory 300 (which may be SDRAM, RAMBUS, or any other type of volatile storage device) via memory interface 530. Each of these functions will be described in more detail in connection with the description of Fig. 5.
Finally, dual video processor 400 outputs one or more video output signals 492. Video output signals 492 may be provided to one or more external components 270 for display, storage, further processing, or any other suitable use. For example, one video output signal 492 may be a primary output signal supporting high-definition television (HDTV) resolution, while a second video output signal 492 may be an auxiliary output supporting standard-definition television (SDTV) resolution. The primary output signal may be used to drive a high-end external component 270, for example a digital TV or a projector, while the auxiliary output is used for a standard-definition (DVD) video recorder, a standard-definition TV (SDTV), a standard-definition preview display, or any other suitable video application. In this way, the auxiliary output signal may allow a user to record an HDTV program on any suitable SDTV medium (for example, a DVD) while allowing the user to simultaneously watch the program on an HDTV display.
Fig. 5 illustrates in greater detail the functions of on-board video processing component 490 of dual video processor 400. Video processing component 490 may include input signal configuration 510, memory interface 530, configuration interface 520, front-end pipeline component 540, frame rate conversion (FRC) and scaling pipeline component 550, color processing and channel blending pipeline component 560, and back-end pipeline component 570.
Configuration interface 520 may receive control information 522 from an external component, such as a processor, via, for example, an I2C interface. Control information 522 may be used to configure input signal configuration 510, front end 540, frame rate conversion 550, color processor 560, back end 570, and memory interface 530. Input signal configuration 510 may be coupled to external inputs on dual video processor 400 so as to receive video signals on inputs 502 (for example, HDTV signals, SDTV signals, or any other suitable digital video signals) and the selected video signals 482 and 484 (Fig. 4). Input signal configuration 510 may then be configured to provide at least one of the received video signals (for example, signals 482, 484, and 502) to front end 540 as video source 512.
Based on this configuration, each of these inputs provided to on-board video processing component 490 may be processed at different times using the on-board video processing pipeline. For example, in one embodiment, dual video processor 400 may include eight input ports. Exemplary ports may include two 16-bit HDTV signal ports, one 20-bit HDTV signal port, three 8-bit SDTV signal ports (which may be in CCIR656 format), a 24-bit graphics port, and one 16-bit external on-screen display port.
Front end 540 may be configured to select, among the available inputs, at least one video source 512 (that is, a channel), and to process the selected video signal stream(s) along one or more video processing pipeline stages. Front end 540 may provide the processed video signal stream(s) from the one or more pipeline stages to frame rate conversion and scaling pipeline stage 550. In some embodiments, front end 540 may include three video processing pipeline stages and provide three separate outputs to FRC and scaling pipeline stage 550. In FRC and scaling pipeline stage 550, there may be one or more processing channels. For example, a first channel may include a main scaler and frame rate conversion unit, a second channel may include another scaler and frame rate conversion unit, and a third channel may include a lower-cost scaler. These scalers may be independent of one another. For example, one scaler may upscale an input image while another downscales the same image. Both scalers may be capable of working on 444 pixels (RGB/YUV 24-bit) or 422 pixels (YC 16-bit).
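The up/down scaling idea can be sketched with the simplest resampler, nearest-neighbor selection along one line; the actual scalers would use higher-quality filtering, so this is only an illustration of independent upscaling and downscaling from the same source:

```python
def scale_line(pixels, out_width):
    """Nearest-neighbor resample of one video line to out_width samples.

    Works for both upscaling (out_width > len(pixels)) and downscaling."""
    in_width = len(pixels)
    return [pixels[i * in_width // out_width] for i in range(out_width)]
```

One scaler instance could call this with a larger `out_width` (upscale) while another uses a smaller one (downscale), each independently, as the text describes.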
Color processing and channel blending pipeline stage 560 may be configured to provide color management functions. These functions may include color remapping; brightness, contrast, hue, and saturation enhancement; gamma correction; and pixel validation. In addition, color processing and channel blending pipeline stage 560 may provide channel blending functions, blending or overlaying different channels, or blending or overlaying two blended video channels with a third channel.
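Channel blending of the kind described is conventionally an alpha mix per pixel; the following minimal sketch (an assumption about the blend math, not taken from the source) shows one 8-bit blend step that could overlay one channel on another:

```python
def blend_pixel(fg, bg, alpha):
    """Alpha-blend one foreground pixel over a background pixel.

    fg, bg are 8-bit component values; alpha in 0..255 (255 = fully foreground).
    Integer arithmetic mirrors what fixed-point blending hardware would do."""
    return (fg * alpha + bg * (255 - alpha)) // 255
```

Blending two already-blended channels with a third, as the stage allows, is simply a second application of the same operation.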
Back-end pipeline stage 570 may be configured to perform data formatting, signed/unsigned number conversion, saturation logic, clock delay, or any other suitable final signal operations that may be needed before one or more channels are output from dual video processor 400.
Each of the pipeline stage segments may be configured to send data to and receive data from off-chip memory 300 using memory interface 530. Memory interface 530 may include at least a memory controller and a memory interface. The memory controller may be configured to run at the maximum speed supported by the memory. In one embodiment, the data bus may be 32 bits wide and may operate at a frequency of 200 MHz. This bus may provide a throughput approaching 12.8 gigabits per second. Each functional block that uses memory interface 530 (that is, each memory client) may address the memory in burst mode of operation. Arbitration among the memory clients may be carried out in a round-robin fashion or using any other suitable arbitration scheme. A more detailed discussion of the various pipeline segments is provided in connection with the descriptions of Figs. 12, 19, 20, 21, and 22.
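Two details here can be made concrete. The 12.8 Gbit/s figure matches a 32-bit bus at 200 MHz with double-data-rate transfers (consistent with the DDR memory mentioned in connection with Fig. 6; the DDR factor is an assumption, since 32 × 200 MHz alone gives only 6.4 Gbit/s). The round-robin arbitration can be sketched as a scan that always starts just past the last-granted client:

```python
def round_robin(requests, last_granted):
    """Grant the next requesting memory client after last_granted, wrapping.

    requests: list of bools, one per client. Returns a client index, or None
    if no client is requesting this cycle."""
    n = len(requests)
    for offset in range(1, n + 1):
        client = (last_granted + offset) % n
        if requests[client]:
            return client
    return None

# Peak throughput: 32-bit bus, 200 MHz, two transfers per clock (DDR assumption).
peak_gbps = 32 * 200e6 * 2 / 1e9
```

Round-robin guarantees that no burst-mode client can starve another, which matters when the de-interlacer, noise reducer, and scalers all contend for the same off-chip memory.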
Each component and pipeline stage in dual video processor 400 may require a different clock generation mechanism or clock frequency. Fig. 6 shows a clock generation system 600 that produces multiple clock signals for this purpose. Clock generation system 600 includes at least crystal oscillator 610, general-purpose analog phase-locked loop circuit 620, digital phase-locked loop circuits 640a-n, and memory analog phase-locked loop circuit 630. The output 612 of crystal oscillator 610 may be coupled to general-purpose phase-locked loop circuit 620, memory phase-locked loop circuit 630, any other suitable component in dual video processor 400, or another component outside the processor, as needed.
Memory analog phase-locked loop circuit 630 may be used to produce a memory clock signal 632, as well as other clock signals of different frequencies 636, which may be selected by selector 650 as the clock signal 652 for operating a memory device (for example, a 200 MHz DDR memory) or another system component.
General-purpose analog phase-locked loop circuit 620 may produce a 200 MHz clock, which may be used as the base clock for one or more digital phase-locked loop (PLL) circuits 640a-n. Digital PLL circuits 640a-n may be used in open-loop mode, in which they behave as frequency synthesizers (that is, multiplying the base clock frequency by a rational number). Alternatively, digital PLL circuits 640a-n may be used in closed-loop mode, in which frequency lock may be achieved by locking onto a respective input clock signal 642a-n (for example, an audio-video sync input). In closed-loop mode, the digital PLLs are capable of achieving precise frequency lock to very slow clock signals. For example, in the field of video processing, vertical video clock signals (for example, v-sync) may be in the range of 50-60 Hz. Multiple system components may use the outputs 644a-n of digital PLL circuits 640a-n for operations requiring a number of open-loop or closed-loop signals. It should be understood that each of outputs 644a-n may provide a clock signal of a different frequency or of the same frequency.
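The open-loop frequency-synthesis role (base clock times a rational number) can be expressed directly; the specific multiplier below, deriving a 27 MHz video clock from the 200 MHz base, is illustrative rather than from the source:

```python
from fractions import Fraction

def synth_freq(base_hz, numerator, denominator):
    """Open-loop synthesis: multiply the base clock by an exact rational factor,
    as a digital PLL in frequency-synthesizer mode would."""
    return base_hz * Fraction(numerator, denominator)

# e.g. a hypothetical 27 MHz pixel clock derived from the 200 MHz base clock
pixel_clock = synth_freq(200_000_000, 27, 200)
```

Exact rational arithmetic is used because in a synthesizer the ratio is an exact configuration value, not a measurement.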
For example, one component that may use the clock signals produced by digital PLL circuits 640a-n is dual decoder 420 (Fig. 4), whose operation is described in more detail in connection with Figs. 7, 8, 9, and 10. Dual decoder 420 may include decoders 422 (Fig. 4). Decoders 422 may be used in multiple modes of operation, as described in connection with Figs. 7, 8, and 9.
Figs. 7, 8, and 9 show three exemplary modes of operation in which decoders 422 are used to produce video signals 426 and 428. These three operating modes may provide, for example, composite video signals, S-video signals, and component video signals.
The first of these three modes may be used to produce a composite video signal, as shown in Fig. 7. The first decoder mode may include DC restore unit 720, analog-to-digital converter 730, and decoder 422, each of which may be included in dual decoder 420 (Fig. 4). Video signal 425 (Fig. 4), provided by dual tuner 410 or, in an alternative arrangement, by multiplexer 424, may be provided to DC restore unit 720. DC restore unit 720 may be used when video signal 425, which may be an AC-coupled signal, has lost its DC reference and should be periodically reset in order to retain video characteristic information such as brightness. The video signal from DC restore unit 720 is digitized by analog-to-digital converter 730 and decoder 422.
In the first mode, decoder 422 may use the digitized video signal 732 from a single analog-to-digital converter to produce a composite video signal. Analog-to-digital converter 730 and decoder 422 may operate by receiving digital clock signals 644a-n (which may be, for example, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz). In addition, decoder 422 may use output feedback signal 427 to control the operation of DC restore unit 720. Output feedback signal 427 may be, for example, a 2-bit control signal that instructs DC restore unit 720 to increase or decrease the DC offset in the video signal provided to analog-to-digital converter 730.
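The 2-bit feedback loop can be sketched as a simple increment/decrement step; the particular code assignments (01 = increase, 10 = decrease, others = hold) are hypothetical, since the source does not define them:

```python
def dc_restore_step(dc_offset, feedback, step=1):
    """Apply one 2-bit feedback update to the DC restore offset.

    Code meanings are illustrative assumptions: 0b01 raises the DC level,
    0b10 lowers it, and any other code leaves it unchanged."""
    if feedback == 0b01:
        return dc_offset + step
    if feedback == 0b10:
        return dc_offset - step
    return dc_offset
```

Repeated steps of this kind let the decoder servo the AC-coupled input back onto the ADC's usable range.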
The second of these three modes may be used to produce an S-video signal, as shown in Fig. 8. The second decoder mode may include all of the elements described in the first mode, plus a second analog-to-digital converter 820. Video signal 425 (Fig. 4) may be divided into a first portion 812 and a second portion 810. The first portion 812 of video signal 425 (Fig. 4) (which may be provided by multiplexer 424) may be provided to DC restore unit 720, and the second portion 810 of video signal 425 (Fig. 4) may be provided to second analog-to-digital converter 820. The first portion 812 of video signal 425, from DC restore unit 720, is digitized by analog-to-digital converter 730 and provided to decoder 422. In addition, the second portion 810 of video signal 425 is provided to decoder 422 via analog-to-digital converter 820. An S-video two-wire analog port is used for connecting to various devices (for example, VCRs, DVD players, etc.).
In this second mode, decoder 422 may use the digitized video signals 732 and 832 from the two analog-to-digital converters 730 and 820 to produce an S-video signal. Analog-to-digital converters 730 and 820 and decoder 422 may operate by receiving digital clock signals 644a-n (Fig. 6; these may be, for example, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz). In some embodiments, the first portion 812 of the video signal may be the Y channel of video signal 425, and the second portion 810 of video signal 425 may be the chroma channel of the video signal.
The third of these three modes may be used to produce a component video signal, as shown in Fig. 9. The third decoder mode may include all of the elements described in the second mode, plus second and third DC restore units 930 and 920 and multiplexer 940. Video signal 425 (Fig. 4) may be divided into a first portion 914, a second portion 910, and a third portion 912. The first portion 914 of video signal 425 (Fig. 4) (which may be provided by multiplexer 424) may be provided to DC restore unit 720, the second portion 910 of video signal 425 (Fig. 4) may be provided to DC restore unit 930, and the third portion 912 of video signal 425 (Fig. 4) may be provided to DC restore unit 920. A component video signal requires a three-wire analog port for connecting to various devices (for example, VCRs, DVD players, etc.).
The first portion 914 of video signal 425, from DC restore unit 720, is digitized by analog-to-digital converter 730 and provided to decoder 422. The second and third portions 910 and 912 of video signal 425, from DC restore units 930 and 920, are selectively digitized by analog-to-digital converter 820 (for example, selected using multiplexer 940) and provided to decoder 422. Multiplexer 940 may receive control signals from decoder 422 so as to time-multiplex the second and third portions 910 and 912 of video signal 425 through analog-to-digital converter 820.
In the third mode, in some embodiments, decoder 422 may use the digitized video signals 732 and 832 from the two analog-to-digital converters 730 and 820 to produce a component video signal. Analog-to-digital converters 730 and 820 and decoder 422 may operate by receiving digital clock signals 644a-n (Fig. 6; these may be, for example, 21, 22, 23, 24, 25, 26, 27, 28, 29, or 30 MHz). In addition, decoder 422 may use output feedback signal 427 to control the operation of DC restore units 720, 930, and 920. In some embodiments, the first, second, and third portions 914, 910, and 912 of video signal 425 may be the Y channel, U channel, and V channel of video signal 425, respectively.
It should be understood that various commonly available types of DC restore units, analog-to-digital converters, and video decoders may be used to perform the foregoing functions; for brevity, their specific operation is omitted from this discussion.
In one embodiment, shown in Fig. 10, all three decoder modes may be implemented using two decoders 422 and three analog-to-digital converters 730, 820, and 1010. The arrangement shown in Fig. 10 may enable dual decoder 420 (Fig. 4) to provide, substantially simultaneously, at least two video signals 426 and 428 corresponding to any two of the three modes (that is, one video signal from each decoder).
Fig. 10 shows an exemplary implementation that uses two decoders to produce two composite video signals, a composite video signal and an S-video signal, a composite video signal and a component video signal, or two S-video signals. The exemplary implementation shown in Fig. 10 includes: a set of multiplexers 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, and 1028; three analog-to-digital converters 730, 820, and 1010; four DC restore units 720, 721, 930, and 920; demultiplexer 1040; and two decoders 422a and 422b.
The exemplary implementation of Fig. 10 may operate as follows when used to produce two composite video signals. A first video signal 425a may be coupled to the first input of multiplexer 1020, and a second video signal 914 may be coupled to the second input of multiplexer 1024. The first input of multiplexer 1020 may be selected and output to the fourth input of multiplexer 1021, for input to DC restore unit 720. The second input of multiplexer 1024 may be selected and output to DC restore unit 721. The operation of the remainder of this implementation is similar to the composite-signal operation described in connection with Fig. 7. For example, DC restore units 720 and 721, analog-to-digital converters 730 and 1010, and decoders 422a and 422b operate in a similar fashion to produce composite video signals as shown in Fig. 7.
Using the exemplary implementation shown in Fig. 10 to produce a composite video signal together with an S-video signal, or a composite video signal together with a component video signal, may be performed in a manner similar to producing the two composite video signals described above. For example, the first and second portions 812 and 810 of the video signal 425 that is used to produce the S-video signal are provided to multiplexers 1022 and 1026. The outputs of multiplexers 1022 and 1026 are provided to multiplexers 1021 and 1027, and multiplexers 1021 and 1027 select which video signals will be processed by analog-to-digital converters 730 and 820. Similarly, multiplexer 1024 selects which video signals will be processed by analog-to-digital converter 1010. Table 1, shown below, provides a more detailed description of the multiplexer input selections for the various operating modes.
The exemplary implementation shown in Fig. 10 also makes it possible to produce two S-video signals 426 and 428. To provide this functionality, a first clock signal 644a operating at a first frequency and first phase (for example, 20 MHz) is provided to analog-to-digital converter 730 and decoder 422a. A second clock signal 644b operating at a second frequency (which may have a 180-degree phase difference from the first clock signal, for example, 20 MHz with 180 degrees of phase difference) may be provided to analog-to-digital converter 1010 and decoder 422b. A third clock signal 644c operating at a third frequency (substantially twice the frequency of the first clock signal, and having the same phase as the first clock signal, for example, 40 MHz) may be provided to analog-to-digital converter 820. Clock signal 644b is provided to multiplexer 1030 to selectively couple clock signal 644b to multiplexers 1026 and 1027. By coupling the clock signal to the select inputs of multiplexers 1026 and 1027, the video signal inputs 810a-c to analog-to-digital converter 820 can be time-division multiplexed. Clock signal 644a is provided to demultiplexer 1040, which demultiplexes the time-multiplexed video signal. A clearer description of the time-division multiplexing operation is provided in connection with Fig. 11.
Fig. 11 shows an exemplary timing diagram for time-multiplexing the second portions 810 of two video signals 425. By time-division multiplexing these operations, the need for a fourth analog-to-digital converter can be avoided, thereby reducing the total cost of dual video processor 400. The timing diagram shown in Fig. 11 includes three clock signals, corresponding to the first, second, and third clock signals 644a, 644b, and 644c, respectively, and the outputs of the three analog-to-digital converters 730, 1010, and 820. As shown, clock 1 and clock 2 operate at half the frequency of clock 3, and transition on the falling edge of clock 3.
As shown, between time periods T1 and T4, one complete cycle of clock signal 644a (clock 1) elapses, and the output of analog-to-digital converter 730 (ADC 1) corresponding to the first portion 812a-c of the first video signal (S0) becomes available for processing by decoder 422a. At the start of time period T2, on the rising edge of clock 3, analog-to-digital converter 820 (ADC 3) begins processing the second portion 810a-c of the second video signal (S1), and finishes processing at the end of time period T3.
At the start of time period T3, analog-to-digital converter 1010 (ADC 2) begins processing the first portion of video signal S1, and finishes at the end of time period T6. The output of ADC 2 corresponding to the first portion of video signal S1 becomes available for processing by decoder 422b at the end of time period T6. At the start of time period T4, on the rising edge of clock 3, analog-to-digital converter 820 (ADC 3) begins processing the second portion 810a-c of video signal S0, and finishes processing at the end of time period T5.
Thus, by the end of time period T6, the two portions of each of the two video signals S0 and S1 have been processed using only three analog-to-digital converters.
On the rising edge of clock 3 between time periods T5 and T6, demultiplexer 1040 provides the output of ADC 3 corresponding to the second portion 810a-c of video signal S0 to decoder 422a, to produce processed video signal 426. Meanwhile, the second portion of video signal S1 is selected for processing by analog-to-digital converter 820 (ADC 3), and becomes available at the end of time period T7.
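The net effect of the timing described above can be modeled in a few lines: the double-rate converter alternates between the two chroma streams each half-cycle, and the demultiplexer routes alternate samples back to the two decoders. This is a behavioral sketch of the data flow, not the hardware timing itself:

```python
def tdm_adc(s0_chroma, s1_chroma):
    """Time-division multiplex two chroma streams through one 2x-rate ADC:
    one sample of each stream is converted per slow clock cycle."""
    muxed = []
    for a, b in zip(s0_chroma, s1_chroma):
        muxed += [a, b]
    return muxed

def demux(muxed):
    """Demultiplexer 1040 equivalent: split alternate samples back into the
    streams destined for decoders 422a and 422b."""
    return muxed[0::2], muxed[1::2]
```

Round-tripping through `tdm_adc` and `demux` recovers both streams intact, which is why the fourth converter can be omitted.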
The foregoing describes one embodiment that uses three analog-to-digital converters 730, 1010, and 820 to produce two S-video signals 426 and 428. Table 1 below summarizes exemplary select signals that may be provided to the corresponding multiplexers to produce composite (cst), component (cmp), and S-video (svid) signals.
Video 1         | Video 2          | M0_sel | M1_sel | M2_sel | M3_sel | M4_sel | M5_sel | M6_sel | M7_sel
425a(cst)       | 425e(cst)        | 0,0    | x,x    | 1,1    | x,x    | x,x    | 0,1    | x,x    | x,x
425a(cst)       | 910,912,914(cmp) | 0,0    | x,x    | 1,1    | x,x    | x,x    | 1,0    | x,x    | 1,429
425b(cst)       | 812a,810a(svid)  | 0,1    | x,x    | 1,1    | x,x    | 0,0    | 0,0    | 0,0    | 0,0
812a,810a(svid) | 812b,810b(svid)  | x,x    | 0,0    | 0,0    | x,x    | 0,1    | 0,0    | 0,644b | 0,0
812a,810a(svid) | 812c,810c(svid)  | x,x    | 0,0    | 0,0    | x,x    | 1,0    | 0,0    | 644b,0 | 0,0
812b,810b(svid) | 812c,810c(svid)  | x,x    | 0,1    | 0,0    | x,x    | 1,0    | 0,0    | 644b,1 | 0,0

Table 1
Dual decoder 420 may also be configured to handle unstable analog or digital signals that may be received from a video cassette recorder (VCR). A VCR may produce unstable signals as a result of various modes such as fast-forward, rewind, or pause. Dual decoder 420 may handle these types of signals during such conditions and provide a good-quality output signal.
Unstable video signals may be caused by the unstable sync signals that the VCR produces. One suitable technique for handling unstable sync signals is to buffer the unstable video signal. For example, a first-in, first-out (FIFO) buffer may be placed near the output of the decoder. First, the decoder output data may be written to this FIFO buffer using the appropriate sync signals as a reference. These sync signals and the clock may be regenerated or re-created from the decoder's logical clock, and the data may then be read from the FIFO buffer when such an operating mode is encountered. In this way, the unstable video signal may be output with stable sync signals. In all other situations and operating modes, the FIFO buffer may be bypassed, with the output identical to the FIFO's input.
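The write-with-unstable-timing, read-with-regenerated-timing idea, including the bypass path, can be sketched as follows; the class name and the underflow policy (repeat a filler sample rather than break timing) are illustrative assumptions:

```python
from collections import deque

class SyncFifo:
    """Minimal sketch of the described decoder-output FIFO.

    Samples are written against the decoder's (possibly unstable) sync and
    read back against a regenerated, stable clock. With buffering disabled,
    the write path falls straight through (bypass: output equals input)."""

    def __init__(self, buffered=True):
        self.buffered = buffered
        self.q = deque()

    def write(self, sample):
        if self.buffered:
            self.q.append(sample)
            return None
        return sample  # bypass mode

    def read(self, fill=0):
        # On underflow, emit a filler sample instead of disturbing output timing.
        return self.q.popleft() if self.q else fill
```

The stable read clock, not the jittery write clock, then defines the output timing.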
Alternatively, implementing the FIFO buffer in off-chip memory may provide suitable handling of unstable sync signals. For example, when an unstable sync signal is detected, the decoder may be placed in 2-D mode, so that very little off-chip memory is used. The bulk of off-chip memory 300 normally used for 3-D operation then becomes available and can be used to implement the aforementioned FIFO buffer (that is, the equivalent of at least one full data field becomes free storage space). In addition, a FIFO buffer in off-chip memory can store the pixels of an entire frame, so that even if the write and read rates do not match, at worst an entire frame is repeated or dropped at the output. Repeating or dropping particular frames, or fields within a frame, can still allow the system to display a reasonably good picture.
Fig. 12 illustrates in greater detail the pipelined operation of front end 540 in the video pipeline. In particular, channel selector 1212 may be configured to select four channels from the multiple video sources 512. These four channels may be processed along four pipelined paths in front end 540. In some embodiments, the four channels may include: a main video channel, a picture-in-picture (PIP) channel, an on-screen display (OSD) channel, and a data instrumentation or test channel.
Front end 540 may implement various video processing stages 1220a, 1220b, 1230, and 1240 for any of these channels. In some embodiments, each channel may share one or more resources from any of the other stages to increase the processing capability of each channel. Some examples of the functions that may be provided by video processing stages 1220a and 1220b include noise reduction and de-interlacing, which may be used to produce the best image quality. The noise reduction and de-interlacing functions may also share off-chip memory 300; this memory is denoted shared memory stage 1260 and is described in more detail in connection with the descriptions of Figs. 13 and 15. To avoid overcrowding the drawing, shared memory stage 1260 is illustrated in Fig. 12 as part of the channel 1 processing stages. It should be understood, however, that one or more shared memory stages 1260 may be part of any pipeline stage in front end 540.
Noise reduction may remove impulse noise, Gaussian noise (spatial and temporal), and MPEG artifacts such as block noise and mosquito noise. De-interlacing may include generating progressive video from interlaced video by interpolating any missing lines using edge-adaptive interpolation in the presence of motion. Alternatively, the de-interlacing function may use a combination of motion-adaptive temporal and spatial interpolation. Both the noise reducer and the de-interlacer may operate in the 3-D domain, and may require storing fields of frames in off-chip memory. The de-interlacer and noise reducer can therefore act as clients of memory interface 530 in order to access off-chip memory. In some implementations, the noise reducer and de-interlacer may share off-chip memory in order to maximize storage space and process data in the most efficient manner, as indicated by shared memory stage 1260. This process is described in more detail in connection with the descriptions of Figs. 13 and 15.
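The motion-adaptive combination of temporal and spatial interpolation can be sketched per pixel: with no detected motion, weave the co-located pixel from the previous field; with full motion, fall back to the intra-field (spatial) estimate. The linear mix and 8-bit motion scale are illustrative assumptions, not the processor's actual algorithm:

```python
def deinterlace_pixel(prev_field, spatial_est, motion):
    """Motion-adaptive fill for one missing pixel of an interlaced frame.

    prev_field: co-located pixel from the previous field (temporal candidate).
    spatial_est: intra-field interpolation of neighboring lines.
    motion: 0..255 motion-detector output (0 = static, 255 = full motion)."""
    return (prev_field * (255 - motion) + spatial_est * motion) // 255
```

The dependence on `prev_field` is exactly why the de-interlacer needs field storage in off-chip memory and hence acts as a memory-interface client.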
Any of the three video processing stages 1220a, 1220b and 1230 can run format conversion to transform a video signal into a desired domain. For example, this conversion can be used to change an input video signal stream to YC 4:2:2 format in the 601 or 709 color space.
Front end 540 can also provide an instrumentation pipeline 1240 to run data instrumentation functions. Instrumentation pipeline 1240 can be used, for example, to find the start and end pixel and line positions of active video, or to find a preferred sampling clock phase when a controllable phase sampler (ADC) is present upstream. Performing these operations helps automatically detect input channel parameters such as resolution, letterboxing and pillarboxing. In addition, detecting these channel parameters helps a microcontroller or any other suitable processing element use them to control features such as scaling and aspect ratio conversion. Front end 540 can also run sync video signal instrumentation functions on all four channels to detect loss of a sync signal, loss of a clock signal, or an out-of-range sync signal or clock signal. These functions can also be used by a microcontroller or any other suitable processing element to drive power management control.
At the end of front end 540, a set of FIFO buffers 1250a-c can sample the video streams to provide sampled video signals 1252, 1254 and 1256 between front end 540 and the frame rate conversion and scaling pipeline stage 550 (Fig. 5); these sampled video signals 1252, 1254 and 1256 can be used when the selected channel is reset.
A more detailed description of shared memory stage 1260 is provided in connection with the descriptions of Figures 13 and 15. In particular, as shown in Figure 13, shared memory stage 1260 can include at least the functions of noise reducer 330 and deinterlacer 340. Both of these functions are temporal functions and may need frame storage in order to produce high-quality images. By enabling various memory access modules (i.e., memory clients) to share off-chip memory 300, the size of off-chip memory 300 and the bandwidth required for interfacing with off-chip memory 300 can be reduced.
Noise reducer 330 can operate on two interlaced fields in 3-D mode. The two fields on which noise reducer 330 can operate can include the live field 1262 and the field before the previous field 332. Deinterlacer 340 can operate on three interlaced fields in 3-D mode. These three fields can include the live field 1262, the previous field 1330, and the field before the previous field 332.
As shown in Figures 13 and 14, field buffers 1310 and 1312 can be shared by noise reducer 330 and deinterlacer 340. Noise reducer 330 can read the field before the previous field 332 from off-chip memory 300, i.e., from field buffer 1310, process it together with live field 1262, and provide noise-reduced output 322. Noise-reduced output 322 can be written to off-chip memory 300, i.e., into field buffer 1312. Deinterlacer 340 can read previous field 1330 from off-chip memory 300, i.e., from field buffer 1312, process the field it has read together with live field 1262 or noise-reduced output 322, and then provide deinterlaced video 1320 as output.
For example, as shown in Figure 14, live field 1262 (field 1) can be provided to noise reducer 330 in order to output noise-reduced output 322 during a first time period (i.e., T1). After or as noise reducer 330 finishes processing field 1 (i.e., during time period T2), noise-reduced output 322 (field 1) can be provided by the noise reducer to deinterlacer 340, or can bypass noise reducer 330 and be provided directly to deinterlacer 340 via 1262 (for example, if no noise reduction is needed). In either case, during the second time period (i.e., time period T2), noise-reduced output 322 (field 1) can be written by noise reducer 330 into field buffer 1312 in off-chip memory 300.
During time period T2, deinterlacer 340 can read output 1330 (field 1) from field buffer 1312 in off-chip memory 300 while processing the live field of the frame (field 2). Field buffer 1312 thus provides the noise-reduced output (field 1) that was processed before the subsequent noise-reduced output 322 (field 2) (i.e., the field before the live field).
After or as noise reducer 330 finishes processing the next live field 1262 (field 2) during a third time period (i.e., T3), the previous field 1330 in field buffer 1312 can be written into field buffer 1310. The next noise-reduced output 322 (field 2) can be written into field buffer 1312, replacing the noise-reduced output (field 1). During time period T3, the contents of field buffer 1312 are the noise-reduced output (field 2) (i.e., the field before the live field), and the contents of field buffer 1310 are the noise-reduced output (field 1) (i.e., the field before the field before the live field).
During time period T3, noise reducer 330 can operate on live field 1262 (field 3) and the field before the previous live field 322 (field 1). During the same time period T3, deinterlacer 340 can operate on several fields: live field 1262 (field 3) or noise-reduced output 322 (field 3), the field before this live field 1330 (field 2), and the field before the previous live field 332 (field 1). Sharing off-chip memory 300 thus requires only two field buffer locations between noise reducer 330 and deinterlacer 340, whereas, as shown in Figure 3, four field buffer locations in off-chip memory 300 would generally be needed to provide similar functionality.
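The aging of the two shared field buffers over time periods T1-T3 can be sketched as follows. This Python model stubs out the actual noise reduction and simply tracks which field occupies buffers 1310 and 1312 each period; the `NR(...)` labels and tuple layout are invented for illustration.

```python
# Sketch of the two shared field buffers described above:
# buf_1312 holds the previous (noise-reduced) field, and buf_1310 holds
# the field before that.  One live field arrives per time period.

def process_fields(fields):
    buf_1310 = None   # field before the previous field (noise-reduced)
    buf_1312 = None   # previous field (noise-reduced)
    out = []
    for live in fields:
        nr = f"NR({live})"                      # noise reducer: live + buf_1310
        out.append((live, buf_1312, buf_1310))  # deinterlacer inputs this period
        buf_1310, buf_1312 = buf_1312, nr       # age the buffers: 1312 -> 1310
    return out
```

Running `process_fields(["f1", "f2", "f3"])` shows that by the third period the deinterlacer sees the live field plus the two prior noise-reduced fields, with only two buffer locations ever in use.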
By reducing the number of field buffer locations in memory, additional memory storage capacity and bandwidth can be provided, with equal processing capability, to extra video processing pipelines, thereby enabling high-quality video processing for at least two channels. In addition, the width of the data bus between dual video processor 400 and off-chip memory 300 can be reduced, because only a single write port and two read ports may be used to provide the aforementioned functionality.
In some other embodiments, noise reducer 330 and deinterlacer 340 can operate on multiple field lines of each frame simultaneously. As shown in Figure 15, each of these field lines can be stored in a live field line buffer 1520, a previous live field line buffer 1530, and a line buffer 1510 for the field before the previous live field. Line buffers 1510, 1520 and 1530 can be memory locations in dual video processor 400 that provide high efficiency and speed when storing and accessing data. To further reduce the amount of memory, line buffer 1510 can be shared between the noise reducer and deinterlacer modules for use by both noise reducer 330 and deinterlacer 340.
As shown in Figure 15, when live field 1262 is received by noise reducer 330 and deinterlacer 340, in addition to the operations described in connection with Figures 13 and 14 for storing the live field in field buffer 1312, live field 1262 can also be stored in live field line buffer 1520. This enables noise reducer 330 and deinterlacer 340 to access multiple live field lines, received at different time intervals, simultaneously. Similarly, the contents stored in field buffer locations 1310 and 1312 can also be moved to corresponding line buffers 1510 and 1530, respectively; line buffers 1510 and 1530 buffer the field before the previous live field (the noise-reduced output before the previous live field) and the previous live field (the noise-reduced output before the live field). This enables noise reducer 330 and deinterlacer 340 to access multiple lines of the previous live field and of the field before the previous live field simultaneously. As a result of including the field line buffers, noise reducer 330 and deinterlacer 340 can operate on multiple field lines simultaneously. Moreover, because noise reducer 330 and deinterlacer 340 share access to the field before the previous live field stored in field buffer location 1310, they can also share access to the corresponding field line buffer 1510. This in turn reduces the memory space required on, or substantially near, dual video processor 400.
Although only three line buffers are shown in Figure 15, it should be appreciated that any number of field line buffers can be provided. In particular, the number of field line buffers provided depends on the amount of memory space available on dual video processor 400 and/or the number of field lines that noise reducer 330 and deinterlacer 340 may need simultaneously. It should also be appreciated that any number of additional noise reduction units and deinterlacing units can be provided to assist in processing multiple field lines.
For example, if two noise reducers 330 and two deinterlacers 340, each capable of processing three live field lines simultaneously, are provided, then eight live field line buffers 1520, six previous live field line buffers 1530 and six line buffers 1510 for the field before the previous live field can be used to process multiple field lines, where the output of each field line buffer would be coupled to the corresponding input of the noise reducer and deinterlacer units. In fact, if the required noise reducers and deinterlacers and the on-chip space are available, it is contemplated that the contents of one or more entire frames can be stored in the field buffers.
Figure 16 illustrates the frame rate conversion and scaling pipeline 550 (Fig. 5) (FRC pipeline) in greater detail. FRC pipeline 550 can include at least scaling and frame rate conversion functionality. In particular, FRC pipeline 550 can include at least two modules for scaling, which can be placed in two of scaler slots 1630, 1632, 1634 and 1636: one scaler to provide scaling for the first channel, and one scaler to provide scaling for the second channel. The advantage of this arrangement will be understood from the description of Figure 17. Each of the scaling modules in scaler slots 1630, 1632, 1634 and 1636 can perform upscaling or downscaling at any scaling ratio. The scalers can also include circuitry for performing aspect ratio conversion, horizontal nonlinear 3-zone scaling, interlacing and deinterlacing. Scaling can be performed synchronously in certain embodiments (i.e., the output is synchronized with the input), or through off-chip memory 300 (i.e., the output can be positioned at any position relative to the input).
FRC pipeline 550 can also include functionality for frame rate conversion (FRC). At least two of the channels can include frame rate conversion circuitry. To perform FRC, video data should be written into a memory buffer and read from that buffer at the desired output rate. For example, a frame rate increase results from reading the output buffer faster than frames arrive, so that a particular frame is repeated over time. A frame rate decrease results from reading the frame to be output from the buffer more slowly than the particular frame is written (i.e., reading frames slower than the input rate). Reading a particular frame while that frame's video data is being written (i.e., during active video) may cause frame tearing or video artifacts.
In particular, to avoid video artifacts such as frame tearing appearing in active video, repeating or dropping frames should take place across whole input frames, not in the middle of a field within a frame. In other words, discontinuities in the video should only occur between frame boundaries (i.e., during the vertical or horizontal sync periods when no picture is displayed), and not in regions of active video. A tearless control mechanism 1610 can operate to alleviate discontinuities between frames, for example by controlling the time at which memory interface 530 reads a portion of a frame in memory. FRC can be performed in normal mode or in tearless mode (i.e., using tearless control mechanism 1610).
In addition to the two scalers placed in two of scaler slots 1630, 1632, 1634 and 1636 for the first and second channels, there can also be a lower-end scaler 1640 for the third channel. The lower-end scaler 1640 can be a more basic scaler, for example one that performs only 1:1 or 1:2 upscaling or any other necessary scaling ratio. Alternatively, one of the scalers of the first and second channels can perform the scaling for the third channel. Multiplexers 1620 and 1622 can control which of the at least three channels is directed to which of the available scalers. For example, multiplexer 1620 can select channel 3 for the scaler in slot 1630 or 1632 to perform a first type of scaling operation, and multiplexer 1622 can select channel 1 for the scaler in slot 1634 or 1636 to perform a second type of scaling operation. It should be appreciated that one channel can also use any number of the available scalers.
FRC pipeline 550 can also include a smooth film mode to reduce motion judder. For example, there can be a film-mode detection module in the deinterlacer that checks the cadence of the input video signal. If the input video signal runs at a first frequency (e.g., 60 Hz), it can be converted to a higher frequency (e.g., 72 Hz) or a lower frequency (e.g., 48 Hz). In the case of conversion to a higher frequency, the film-mode detection module can provide a frame repeat indication signal to the FRC module. This frame repeat indication signal can be high during a first set of frames (e.g., one frame) of the data generated by the deinterlacer, and low during a second set of frames (e.g., four frames). During the portion of time in which the frame repeat indication signal is high, the FRC can repeat a frame, generating the correct data sequence at the higher frequency. Similarly, in the case of conversion to a lower frequency, the film-mode detection module can provide a frame drop indication signal to the FRC module. During the time period in which the frame drop indication signal is high, a particular set of frames in the sequence is dropped, generating the correct data sequence at the lower frequency.
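The whole-frame repeat/drop behavior described above can be sketched in a few lines of Python. The function name and the index arithmetic are illustrative; the 60-to-72 Hz and 60-to-48 Hz ratios are the examples from the text (repeat one frame in five, drop one frame in five).

```python
# Illustrative frame rate conversion by whole-frame repeat or drop.
# Discontinuities occur only at frame boundaries, never inside a frame.

def convert_rate(frames, in_hz, out_hz):
    """Resample a frame sequence so the output count scales by out_hz/in_hz.
    Each output slot takes the nearest earlier input frame, so a slot is
    either a fresh frame, a repeat (rate increase), or frames are skipped
    (rate decrease)."""
    out_count = len(frames) * out_hz // in_hz
    return [frames[i * in_hz // out_hz] for i in range(out_count)]
```

For five input frames, converting 60 to 72 Hz yields six output frames with the first frame repeated once (the "high for one frame, low for four" pattern), while converting 60 to 48 Hz yields four frames with one dropped.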
Depending on the type of scaling desired, scalers can be configured for placement in each of scaler slots 1630, 1632, 1634 and 1636, as shown by scaler positioning module 1660. Scaler slots 1632 and 1636 are both positioned after the memory interface, with scaler slot 1632 corresponding to scaling operations performed on the first channel, and scaler slot 1636 corresponding to scaling operations performed on the second channel. As shown in the figure, one scaler positioning module 1660 can include a multiplexer 1624 for selecting the output corresponding to a particular scaler configuration, while another scaler positioning module 1660 may not include a multiplexer and may instead couple the output of the scaler directly to a video pipeline component. Multiplexer 1624 provides the flexibility to implement three modes of operation (described in more detail in connection with Figure 17) using only three scaler slots. For example, if multiplexer 1624 is provided, the scaler positioned in slot 1630 can be coupled to memory in order to provide downscaling and upscaling, and can also be coupled to multiplexer 1624. If no memory operation is desired, multiplexer 1624 can select the output of scaler slot 1630. Alternatively, if a memory operation is desired, the scaler in scaler slot 1630 can scale the data, and multiplexer 1624 can select the data from the other scaler, in scaler slot 1632, which can upscale or downscale the data. The output of multiplexer 1624 can then be provided to another video pipeline component, for example blank time optimizer 1650, which is described in more detail in connection with the description of Figure 18.
As shown in Figure 17, scaler positioning module 1660 can include at least: an input FIFO buffer 1760, a connection to memory interface 530, at least one of three scaler positioning slots 1730, 1734 and 1736, a write FIFO buffer 1740, a read FIFO buffer 1750, and an output FIFO buffer 1770. The scaler positioning slots can correspond to the slots described in Figure 16. For example, scaler positioning slot 1734 can correspond to slot 1630 or 1634, and scaler positioning slot 1730 can similarly correspond to slot 1630. As mentioned above, using multiplexer 1624 enables slot 1630 to provide the functions of scaler positioning slots 1730 and 1734. One or two scalers can be positioned, relative to memory interface 530, in any one or two of the three scaler positioning slots 1730, 1734 and 1736. Scaler positioning module 1660 can be part of any channel pipeline in FRC pipeline 550.
When synchronous mode is desired, a scaler can be positioned in scaler positioning slot 1730. In this mode, there may be no FRC in the system, which eliminates the need for that particular FRC channel pipeline to access memory. In this mode, the output v-sync signal can be locked to the input v-sync signal.
Alternatively, a scaler can be positioned in scaler positioning slot 1734. It may be desirable to position the scaler in slot 1734 when FRC is needed and the input data should be downscaled. The input data is downscaled before being written to memory (i.e., because a smaller frame size is desired), which reduces the amount of memory storage that may be needed. Because less data is stored in memory, the output data read rate can be reduced, thereby also reducing the total memory bandwidth required (and thus the cost) and providing a more efficient system.
In another scenario, a scaler can be positioned in scaler positioning slot 1736. It may be desirable to position the scaler in slot 1736 when FRC is needed and the input data should be upscaled. Data can be provided to memory at a rate lower than the rate at which the output data is read (i.e., the frame size at the input is smaller than at the output). As a result, less data is written to memory, by storing the smaller frame and later using the scaler to increase the frame size at the output. If, on the other hand, the scaler were positioned before memory, in slot 1734, and used to upscale the input data, then larger frames would be stored in memory, requiring more bandwidth. By positioning the scaler after memory in this scenario, however, the initially smaller frames can be stored in memory (thus consuming less bandwidth) and read back later and upscaled.
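The bandwidth argument behind the two scaler placements can be made concrete with a small back-of-envelope model. The function and the pixel counts are invented for illustration; it simply counts how many pixels cross the memory interface per frame in each placement.

```python
# Back-of-envelope memory traffic for scaler placement, per frame.
# scaler_after_memory=True  -> slot 1736: store the unscaled input frame.
# scaler_after_memory=False -> slot 1734: store the already-scaled frame.

def memory_traffic(in_pixels, scale, scaler_after_memory):
    """Return (pixels written, pixels read) across the memory interface."""
    if scaler_after_memory:
        return in_pixels, in_pixels          # small frame stored, upscaled on read
    out_pixels = int(in_pixels * scale)
    return out_pixels, out_pixels            # scaled frame stored as-is
```

For a 4x upscale of a 1000-pixel frame, placing the scaler after memory moves 1000 pixels each way instead of 4000; for a downscale (scale < 1), placing it before memory is the cheaper option, matching the slot 1734 discussion above.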
Because there can be two independent scalers in the two separate scaler positioning modules 1660, if both scaler positioning modules 1660 have memory access requirements for the first and second channels, it may be the case that one of them needs high-bandwidth memory access while the other may need low-bandwidth memory access. A blank time optimizer (BTO) multiplexer 1650 can provide one or more memory buffers (large enough to store one or more field lines) in order to reduce memory bandwidth and to allow any number of channels to share the stored field lines, thereby reducing memory storage requirements.
Figure 18 is an example illustrating the operation of BTO multiplexer 1650 (Figure 16). As shown in Figure 18, a first channel (the main channel) occupies the majority of screen 1810, and a second channel (the PIP channel) occupies a small portion of screen 1810. As a result, the PIP channel may have less active data and may require fewer memory accesses in the same time interval than the main channel, thereby requiring less bandwidth.
For example, if a field line in a frame contains 16 pixels, the PIP channel may occupy only 4 pixels of the resulting field line in the frame, and the main channel may occupy the remaining 12 pixels. Accordingly, the amount of time the PIP channel has in which to access memory to process its 4 pixels is four times as long as that of the main channel, so the PIP channel needs less bandwidth, as shown by memory access timeline 1840 (i.e., PIP has a long blank time interval). Therefore, to reduce the memory bandwidth required, the PIP channel can access memory at a substantially lower rate, allowing the main channel to use the remaining bandwidth.
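The 16-pixel-line example above reduces to simple rate arithmetic, sketched here in Python under the stated assumptions (the function name and the unit of access time are illustrative).

```python
# Minimum memory access rate needed to move a pixel burst within the
# available window of time; this is the quantity the BTO trades off.

def required_rate(pixels, access_time):
    """Pixels per unit time needed to fetch `pixels` within `access_time`."""
    return pixels / access_time
```

With the main channel fetching 12 pixels in one unit of time and the PIP channel given four times as long to fetch its 4 pixels, the main channel needs a rate of 12 while the PIP channel needs only 1, which is the blank time the BTO exploits.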
While accessing memory, BTO multiplexer 1650 can be configured to use different clock rates on different channels. For example, when a slower clock rate is desired on a particular channel, BTO multiplexer 1650 can receive the requested data from a memory access module (client) 1820 (i.e., the PIP channel) using one clock rate 1844, store the data in a field line buffer, and access memory using a second, possibly lower, clock rate 1846. Bandwidth demand can be reduced by preventing the client from directly accessing memory at the higher clock rate, and instead using the field line buffer to access memory at a lower effective rate.
BTO multiplexer 1650 can enable the sharing of field line buffers among different channels, which in turn can further reduce the memory space required in off-chip memory 300. In this way, BTO multiplexer 1650 can use the shared field line buffers to blend or overlay the different channels that share portions of the display.
The output of BTO multiplexer 1650 can be provided to the color processing and channel blending (CPCB) video pipeline 560 (Fig. 5). Figure 19 shows a more detailed illustration of the CPCB video pipeline 560. CPCB video pipeline 560 includes at least a sampler 1910, a visual processing and sampling module 1920, an overlay engine 2000, an auxiliary channel overlay module 1962, further main and auxiliary channel scaling and processing modules 1970 and 1972, a signature accumulator 1990, and a downscaler 1980.
The functions of CPCB video pipeline 560 can include at least improving video signal characteristics, for example image enhancement by luma and chroma edge enhancement, and film grain generation and addition through blue noise shaping masks. In addition, CPCB video pipeline 560 can blend at least two channels. The output of the blended channels can be selectively blended with a third channel to provide a three-channel blended output and a two-channel blended output.
As shown in Figure 21, CMU 1930, which can be included in the overlay engine 2000 portion of CPCB video pipeline 560, can improve at least one video signal characteristic. The video signal characteristics can include: adaptive contrast enhancement 2120; global brightness, contrast, hue and saturation adjustment of the image; local intelligent color remapping 2130; intelligent saturation control that keeps hue and brightness constant; gamma control 2150 and 2160 through lookup tables; and color space conversion (CSC) 2110 to a desired color space.
The architecture of CMU 1930 enables the CMU to receive video channel signal 1942 in any format and convert output 1932 to any other format. CSC 2110 at the front of the CMU pipeline can receive video channel signal 1942 and convert any possible 3-color space into the video color processing space (e.g., converting RGB to YCbCr). In addition, a CSC at the end of the CMU pipeline can convert from the color processing space into the output 3-color space. A global processing function 2140 can be used to adjust brightness, contrast, hue and/or saturation, and can be shared with the output CSC. Because the CSC and global processing function 2140 both perform matrix multiplication operations, the two matrix multipliers can be combined into one. This sharing can be performed by precomputing the final coefficients after combining the two matrix multiplication operations.
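The coefficient-precomputation idea rests on matrix associativity: applying the global adjustment matrix after the CSC matrix is the same as applying their product once. The following sketch shows this with plain 3x3 matrices; the function names are invented, and offset vectors (which a real pipeline would also fold in) are omitted for brevity.

```python
# Folding two per-pixel matrix operations (output CSC and global adjust)
# into one precomputed matrix, as described in the text.

def matmul3(a, b):
    """3x3 matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply3(m, v):
    """Apply a 3x3 matrix m to a pixel vector v."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def fold(global_adjust, csc):
    """Precompute once; afterwards each pixel needs one multiply, not two."""
    return matmul3(global_adjust, csc)
```

Because `apply3(fold(g, c), v)` equals `apply3(g, apply3(c, v))` for every pixel `v`, the folded matrix halves the per-pixel multiplier work, which is exactly the sharing of the two matrix multipliers the text describes.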
CPCB video pipeline 560 can also provide dithering to a given number of bits, as may be needed by the display device. An interlacer can also be provided for at least one of the channel outputs. CPCB video pipeline 560 can also generate control outputs (Hsync, Vsync, field) for at least one of the channel outputs that can be displayed on a device. In addition, CPCB video pipeline 560 can separately adjust brightness, contrast, hue and saturation globally for at least one of the output channels, and can provide extra scaling and FRC for at least one of the output channels.
Referring again to Figures 16 and 19, channel outputs 1656, 1652 and 1654 from FRC pipeline 550 are provided to CPCB video pipeline 560. First channel 1656 can be processed along a first path, which can use sampler 1910 to upsample the video signal on first channel 1656; the output 1912 of sampler 1910 can be provided to both main channel overlay module 1960 and auxiliary channel overlay module 1962 to produce at least one blended image output. Second channel 1652 can be processed along a second path, which provides visual processing and sampling module 1920. The output of visual processing and sampling module 1920 (which may upsample the video signal) can be input to video overlay module 1940 (or overlay engine 2000) to blend or position a third channel 1654 with that output (third channel 1654 can also pass through sampler 1910). The functions of overlay engine 2000 are described in more detail in connection with Figure 20.
The output 1942 of the video overlay (which can be the first video channel signal 1623 overlaid with the second video channel signal 1625) can be provided through CMU 1930 to main channel overlay module 1960, and can also be provided to multiplexer 1950. In addition to receiving the video overlay output 1942, multiplexer 1950 can also receive the outputs of visual processing and sampling module 1920 and sampler 1910. Multiplexer 1950 operates to select one of its video signal inputs to provide to auxiliary channel overlay module 1962. Alternatively, multiplexer 1951 can select either the output of multiplexer 1950 or the output 1932 of CMU 1930 to provide as video signal output 1934 to auxiliary channel overlay module 1962. This arrangement of processing units before the main and auxiliary channel overlay modules enables the same video signal to be provided both to the main channel overlay module and to the auxiliary channel overlay module. After further processing by units 1970 and 1972, the same video signal (VI) can simultaneously 1) be output at main output 1974 for display as the main output signal, and 2) undergo further downscaling processing and then be output at auxiliary output 1976 for display or storage as the auxiliary output signal.
To provide independent control of the data selection going to main output 1974 and auxiliary output 1976, the main and auxiliary channels can be formed by independently selecting from the first and second video channel signals 1932 and 1934 of the first and second video channel overlay modules 1940. Auxiliary channel overlay module 1962 can select the first video channel signal 1652, the second video channel signal 1654, or the overlaid first and second video channel signals 1942. Because CMU 1930 is applied to the first video channel signal 1652, the second video channel signal 1654 can be selected by multiplexer 1951 before or after CMU 1930, depending on whether the first and second video channel signals have the same or different colors. In addition, the first and second video channel signals 1932 and 1934 can be independently blended with the third video channel signal 1656.
CPCB video pipeline 560 can also provide scaling and FRC for auxiliary output 1976, represented by downscaler 1980. This function may be necessary in order to provide an auxiliary output 1976 that is separate from main output 1974. Because the higher-frequency clock should be selected as the scaling clock, CPCB video pipeline 560 can use the main output clock, since the auxiliary clock frequency may be less than or equal to the frequency of the main clock. Downscaler 1980 can also have the ability to generate interlaced data, which can undergo FRC and output data formatting to be output as the auxiliary output.
In some cases, when the first channel is an SDTV video signal and main output 1974 should be an HDTV signal while auxiliary output 1976 should be an SDTV video signal, CMU 1930 can convert the first channel SD video signal to HD video and then perform HD color processing. In this case, multiplexer 1950 can select video signal 1942 (the signal that has not passed through CMU 1930) as its output, thereby providing an HD signal to main channel overlay module 1960 and a processed SDTV signal to auxiliary channel overlay module 1962. Further main and auxiliary channel scaling and processing module 1972 can perform the color control for auxiliary output 1976.
In some other cases, when the first channel is an HDTV video signal and main output 1974 should be an HDTV signal while auxiliary output 1976 should be an SDTV video signal, CMU 1930 can perform HD processing, and multiplexer 1951 can select the output 1932 of the CMU to provide the HDTV-processed signal to auxiliary channel overlay module 1962. Further main and auxiliary channel scaling and processing module 1972 can perform color control and change the color space to SDTV for auxiliary output 1976.
In some other cases, in which both main and auxiliary outputs 1974 and 1976 should be SD video signals, further main and auxiliary channel scaling and processing modules 1970 and 1972 can perform similar color control functions to bring the signals to the conditions required for output to the corresponding main and auxiliary outputs 1974 and 1976.
It should be appreciated that if a video channel does not use a particular portion of a pipeline in any of pipeline segments 540, 550, 560 and 570 (Fig. 5), that portion can be configured to be used by another video channel to enhance video quality. For example, if second video channel 1264 does not use deinterlacer 340 in FRC pipeline 550, then first video channel 1262 can be configured to use deinterlacer 340 of the second video channel pipeline in order to improve its video quality. As described in connection with Figure 15, additional noise reducers 330 and additional deinterlacers 340 can improve the quality of a particular video signal by allowing shared memory pipeline segment 1260 to process extra field lines simultaneously (for example, six field lines processed simultaneously).
Some example output formats that can be provided using CPCB video pipeline 560 include: National Television System Committee (NTSC) and Phase Alternating Line (PAL) main and auxiliary outputs of the same input image; HD and SD (NTSC or PAL) main and auxiliary outputs of the same output image; two different outputs in which the main output provides the first channel image and the auxiliary output provides the second channel image; overlaid first and second channel video signals at the main output and one channel video signal (first channel or second channel) at the auxiliary output; different OSD blending factors (alpha values) at the main output and the auxiliary output; independent brightness, contrast, hue and saturation adjustment at the main and auxiliary outputs; different color spaces for the main and auxiliary outputs (for example, Rec. 709 for the main output and Rec. 601 for the auxiliary output); and/or a sharper/smoother image at the auxiliary output obtained by using different sets of scaling coefficients on the first channel scaler and the second channel scaler.
FIG. 20 illustrates the overlay engine 2000 (FIG. 19) in greater detail. The overlay engine 2000 may include at least a video overlay module 1940, the CMU 1930, first and second channel parameters 2020 and 2030, a selector 2010, and a main M-plane overlay module 2060. It should be understood that the main M-plane overlay module 2060 is similar to the main channel overlay module 1960 (FIG. 19), but may include additional functionality that may be used to blend or overlay further channel video signals 2040 and the third channel input 1912 (FIG. 19).
The overlay engine 2000 may generate a single video channel stream by placing M available independent video/graphics planes on the final display canvas. In one particular embodiment, the overlay engine 2000 may generate a single channel stream by placing six planes on the final display canvas. The position of each plane on the display screen may be configurable. The priority of each plane may also be configurable. For example, if the positions of the planes on the display canvas overlap, priority ranking may be used to resolve which plane is placed on top and which plane may be hidden. The overlay may also be used to assign an optional edge to each plane.
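The position and priority behavior described above can be sketched in software. This is a minimal illustrative model, not the hardware's actual compositing logic: a one-dimensional "canvas" stands in for the display, and the names (`compose`, `pixels`, `priority`) are assumptions made for the example.

```python
# Sketch of plane placement with configurable position and priority.
# Higher-priority planes are drawn last, so they appear on top where
# planes overlap; uncovered positions remain None.

def compose(canvas_width, planes):
    """planes: list of dicts with 'pixels', 'x' (position), 'priority'.
    Returns the composed canvas."""
    canvas = [None] * canvas_width
    for plane in sorted(planes, key=lambda p: p["priority"]):
        for i, px in enumerate(plane["pixels"]):
            pos = plane["x"] + i
            if 0 <= pos < canvas_width:
                canvas[pos] = px  # this plane hides lower-priority pixels
    return canvas
```

For example, a small high-priority plane placed over a larger low-priority one hides only the pixels it overlaps.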
Examples of the other video channel signals 2040 and their sources may include: a main plane, which may be the first channel video signal 1652; a PIP plane, which may be the second channel video signal 1654; a character OSD plane, which may be generated using an on-chip character OSD generator; and a bit-mapped OSD plane, which may be generated using a bit-mapped OSD engine. The OSD images may be stored in a memory, where a memory interface may be used to fetch various pre-stored bit-mapped objects in the memory and place them on a canvas that may also be stored in the memory. The memory interface may also perform format conversion while fetching the requested objects. The bit-mapped OSD engine may read the stored canvas in raster scan order and send it to the overlay module. Additional video channel signals 2040 may include: a cursor OSD plane, which may be generated by a cursor OSD engine and may use a small on-chip memory to store the bitmap of a small object such as a cursor; and an external OSD plane received from an external source. The external OSD engine may send out a raster control signal and a read clock. The external OSD source may use these control signals as a reference and send data in scan order. This data may be routed to the overlay module. If the external OSD plane is enabled, a flexi-port may be used to receive the external OSD data.
The overlay module 1940, located before the CMU 1930, may overlay the first video channel stream 1653 and the second video channel stream 1655. The overlay module 1940 may enable the CMU 1930 to perform more efficiently by allowing the CMU 1930 to operate on a single video stream, thereby removing the need to replicate modules within the CMU 1930 for multiple video channels. In addition to providing the single video channel signal 1942 to the CMU 1930, the overlay module 1940 may also provide a portion indicator 1944 (i.e., on a pixel-by-pixel basis) to the CMU 1930 that identifies whether a given video portion belongs to the first video channel stream or the second video channel stream.
Two sets of programmable parameters 2020 and 2030, corresponding to the first video channel stream 1653 and the second video channel stream 1655, may be provided. The selector 2010 may use the portion indicator 1944 to select which set of programmable parameters to provide to the CMU 1930. For example, if the portion indicator 1944 indicates that the portion being processed by the CMU 1930 belongs to the first video channel stream 1653, the selector 2010 may provide the programmable parameters 2020 corresponding to the first video channel stream 1653 to the CMU 1930.
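The per-pixel parameter selection can be sketched as follows. This is an illustrative software model only, assuming a simple brightness/contrast adjustment as the color operation; the parameter names and `cmu_process` function are hypothetical, not the actual hardware interface.

```python
# One programmable parameter set per channel (illustrative values).
PARAMS_CH1 = {"brightness": 10, "contrast": 1.1}
PARAMS_CH2 = {"brightness": -5, "contrast": 0.9}

def cmu_process(pixel, params):
    """Apply a simple brightness/contrast adjustment to one pixel value."""
    return max(0, min(255, int(pixel * params["contrast"] + params["brightness"])))

def process_overlaid_stream(pixels, portion_indicator):
    """Process a single overlaid stream. The per-pixel indicator selects
    the parameter set, so only one CMU instance is needed for both
    channels."""
    out = []
    for pixel, channel in zip(pixels, portion_indicator):
        params = PARAMS_CH1 if channel == 1 else PARAMS_CH2
        out.append(cmu_process(pixel, params))
    return out
```

The design point this illustrates is that the color-processing logic is instantiated once and only the parameters are switched per pixel.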
There may be a number of layers equal to the number of video planes. Layer 0 may be the bottom-most layer, and subsequent layers may have increasing layer indices. The layers may not have size and position characteristics, but instead may provide the order in which they are stacked. The overlay engine 2000 may thus blend the layers moving upwards, starting from layer 0. Layer 1 may first be blended with layer 0 using the blend factor associated with the video plane placed on layer 1. The output of the blending of layers 0 and 1 may then be blended with layer 2. The blend factor that may be used is the one associated with the plane placed on layer 2. The output of the blending of layers 0, 1 and 2 may then be blended with layer 3, and so on, until the last layer has been blended. It should be understood that one skilled in the art may choose to blend any combination of layers without departing from the teachings of the present invention. For example, layer 1 may be blended with layer 3 and then with layer 2.
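The bottom-up blending order described above can be sketched with standard alpha blending. This is a minimal sketch under the assumption that each layer carries a single uniform blend factor; real hardware would blend per pixel, but scalars keep the example short.

```python
# Bottom-up layer blending: each layer above layer 0 is blended onto the
# running result using that layer's own blend (alpha) factor.

def blend(bottom, top, alpha):
    """Standard alpha blend: alpha * top + (1 - alpha) * bottom."""
    return alpha * top + (1.0 - alpha) * bottom

def overlay_layers(layers):
    """layers: list of (value, blend_factor), index 0 = bottom layer.
    Layer 0's own blend factor is unused; blending starts with layer 1."""
    result, _ = layers[0]
    for value, alpha in layers[1:]:
        result = blend(result, value, alpha)
    return result
```

Note that the result depends on the stacking order, which is why the layer indices (rather than sizes or positions) determine the blend sequence.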
It should also be understood that although the overlay engine 2000 has been described in connection with the main output channel, the color processing and channel blending (CPCB) pipeline 560 may be modified to provide M-plane overlaying using the overlay engine 2000 on the auxiliary output channel as well.
FIG. 22 illustrates the back-end pipeline stage 570 of the video pipeline in greater detail. The back-end pipeline stage 570 may include at least a main output formatter 2280, a signature accumulator 1990, an auxiliary output formatter 2220 and a selector 2230.
The back-end pipeline stage 570 may perform output formatting for both the main and auxiliary outputs, and may generate control outputs (Hsync, Vsync, Field) for the auxiliary output. The back-end pipeline stage 570 may facilitate both digital and analog interfaces. The main output formatter 2280 may receive the processed main video channel signal 1974 and generate a corresponding main output signal 492a. The auxiliary output formatter 2220 may receive the processed auxiliary video channel signal 1976 and generate a corresponding auxiliary output signal 492b. The signature accumulator 1990 may receive the auxiliary video channel signal 1976, accumulate and compare differences between the accumulated signals to determine the video signal quality of the output video signal, and may provide this information to a processor to change system parameters if necessary.
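The signature-accumulation idea can be sketched as follows. The reduction used here (a modular sum) and the notion of comparing successive signatures are illustrative assumptions; the source does not specify the hardware's actual signature algorithm.

```python
# Sketch of a signature accumulator: each frame is reduced to a small
# signature, and successive signatures are compared; a large delta may
# indicate an unexpected change in output quality.

def frame_signature(frame):
    """Reduce a frame (list of pixel values) to one accumulated value."""
    return sum(frame) & 0xFFFFFFFF  # accumulate modulo 32 bits

def signature_deltas(frames):
    """Difference between each frame's signature and the previous one."""
    sigs = [frame_signature(f) for f in frames]
    return [abs(b - a) for a, b in zip(sigs, sigs[1:])]
```

A supervising processor could monitor these deltas and adjust system parameters when they exceed an expected range.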
Before being formatted for output 492b, the auxiliary video channel signal 1976 may also be provided to a CCIR 656 encoder (not shown). The CCIR 656 encoder may perform any necessary encoding to condition the signal for an external storage device or some other suitable device. Alternatively, the auxiliary video channel signal 1976 may be provided as output signal 492b without being encoded or formatted, by using the selector 2230 to select the bypass auxiliary video channel signal 2240.
An interlacing module (not shown) may also be provided in the back-end pipeline stage 570. If the input signal is interlaced, it may first be converted to progressive by the de-interlacer 340 (FIG. 13). The de-interlacer may be necessary because all subsequent modules in the video pipeline stages may operate in the progressive domain. If an interlaced output is desired, the interlacing module in the back-end pipeline stage 570 may be selectively turned on.
The interlacing module may include at least a memory large enough to store pixels of at least two lines, and may be modified to store an entire frame if necessary. The progressive input may be written into the memory using progressive timing. Interlaced timing, locked to the progressive timing, may be generated at half the pixel rate. Data may then be read from the memory using the interlaced timing. Even field lines may be dropped in odd fields, and odd field lines may be dropped in even fields. This may in turn produce an interlaced output suitable for use with a given device.
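The line-dropping step can be sketched as follows, leaving out the timing aspects. Line and field numbering conventions vary between standards; 0-based indexing is an assumption made for this example.

```python
# Sketch of splitting a progressive frame into interlaced fields by
# dropping alternate lines: the odd field keeps odd-indexed lines
# (even lines dropped), and the even field keeps even-indexed lines.

def frame_to_fields(frame):
    """frame: list of lines (each line a list of pixels).
    Returns (odd_field, even_field)."""
    odd_field = frame[1::2]   # even field lines dropped
    even_field = frame[0::2]  # odd field lines dropped
    return odd_field, even_field
```

In hardware, reading the stored lines at half the pixel rate with interlaced timing achieves the same selection without building both fields explicitly.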
Accordingly, it can be seen that apparatus and methods are provided for providing multiple high-quality video channel streams using a shared memory device. One skilled in the art will appreciate that the present invention may be practiced by embodiments other than those described, which are presented for purposes of illustration rather than of limitation, and the present invention is limited only by the claims which follow.